Dataloader-rs - Rust implementation of Facebook's DataLoader using async-await.

Overview

Rust implementation of Facebook's DataLoader using async-await.

Documentation

Features

  • Batching load requests with caching
  • Batching load requests without caching

Usage

Switching runtimes using Cargo features

  • runtime-async-std (default), to use the async-std runtime
    • dataloader = "0.14"
  • runtime-tokio to use the Tokio runtime
    • dataloader = { version = "0.14", default-features = false, features = ["runtime-tokio"]}

Add to your Cargo.toml:

[dependencies]
dataloader = "0.14"
futures = "0.3"
async-trait = "0.1"

Example:

use async_trait::async_trait;
use dataloader::cached::Loader;
use dataloader::BatchFn;
use futures::executor::block_on;
use std::collections::HashMap;
use std::thread;

struct MyLoadFn;

#[async_trait]
impl BatchFn<usize, usize> for MyLoadFn {
    async fn load(&mut self, keys: &[usize]) -> HashMap<usize, usize> {
        println!("BatchFn load keys {:?}", keys);
        keys.iter()
            .map(|v| (v.clone(), v.clone()))
            .collect::<HashMap<_, _>>()
    }
}

fn main() {
    let mut i = 0;
    while i < 2 {
        let a = MyLoadFn;
        let loader = Loader::new(a).with_max_batch_size(4);

        let l1 = loader.clone();
        let h1 = thread::spawn(move || {
            let r1 = l1.load(1);
            let r2 = l1.load(2);
            let r3 = l1.load(3);

            let r4 = l1.load_many(vec![2, 3, 4, 5, 6, 7, 8]);
            let f = futures::future::join4(r1, r2, r3, r4);
            println!("{:?}", block_on(f));
        });

        let l2 = loader.clone();
        let h2 = thread::spawn(move || {
            let r1 = l2.load(1);
            let r2 = l2.load(2);
            let r3 = l2.load(3);
            let r4 = l2.load(4);
            let f = futures::future::join4(r1, r2, r3, r4);
            println!("{:?}", block_on(f));
        });

        h1.join().unwrap();
        h2.join().unwrap();
        i += 1;
    }
}
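
The example above uses the caching loader. For batching without caching, the crate also provides dataloader::non_cached::Loader with the same call surface; a minimal sketch, assuming the same dependencies as above:

use async_trait::async_trait;
use dataloader::non_cached::Loader;
use dataloader::BatchFn;
use futures::executor::block_on;
use std::collections::HashMap;

struct TenTimes;

#[async_trait]
impl BatchFn<usize, usize> for TenTimes {
    async fn load(&mut self, keys: &[usize]) -> HashMap<usize, usize> {
        // All keys gathered for the current batch arrive here in one call.
        keys.iter().map(|&k| (k, k * 10)).collect()
    }
}

fn main() {
    let loader = Loader::new(TenTimes);
    // Without caching, loading the same key again later triggers a fresh
    // batch call instead of returning a memoized value.
    let f = futures::future::join(loader.load(1), loader.load(2));
    println!("{:?}", block_on(f));
}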

LICENSE

This project is licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Comments
  • Inconsistent execution results


    I've noticed that sometimes batching works and sometimes it doesn't... I'm using a very simple schema at the moment; sending the same request multiple times can give different results in how the loader is executed:

    query foo {
      orders {
        id
        user { id }
      }
    }
    

    I'm using a loader on the orders.user field:

    #[juniper::graphql_object(Context = Context)]
    impl Order {
        fn id(&self) -> juniper::ID {
            self.id.to_string().into()
        }
    
        async fn user(&self, ctx: &Context) -> types::User {
            ctx.loaders.user.load(self.user.clone()).await.unwrap()
        }
    }
    

    and the UserLoader is basically a copy-paste of the example in the Juniper docs.

    The db contains 2 orders, both having the same user field. Here are some logs from the same request sent multiple times:

     DEBUG hyper::proto::h1::io                > read 643 bytes
     DEBUG hyper::proto::h1::io                > parsed 14 headers
     DEBUG hyper::proto::h1::conn              > incoming body is content-length (109 bytes)
     DEBUG hyper::proto::h1::conn              > incoming body completed
     DEBUG syos::graphql::utils::loaders::user > load batch [ObjectId(593fdd2ba9c0edf74ff0b38c), ObjectId(593fdd2ba9c0edf74ff0b38c)]
     INFO  GraphQL                             > 127.0.0.1:45080 "POST /graphql HTTP/1.1" 200 "http://127.0.0.1:4444/graphiql" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36" 9.229761ms
     DEBUG hyper::proto::h1::io                > flushed 372 bytes
    
    
     DEBUG hyper::proto::h1::io                > read 643 bytes
     DEBUG hyper::proto::h1::io                > parsed 14 headers
     DEBUG hyper::proto::h1::conn              > incoming body is content-length (109 bytes)
     DEBUG hyper::proto::h1::conn              > incoming body completed
     DEBUG syos::graphql::utils::loaders::user > load batch [ObjectId(593fdd2ba9c0edf74ff0b38c)]
     DEBUG syos::graphql::utils::loaders::user > load batch [ObjectId(593fdd2ba9c0edf74ff0b38c)]
     INFO  GraphQL                             > 127.0.0.1:45080 "POST /graphql HTTP/1.1" 200 "http://127.0.0.1:4444/graphiql" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36" 12.157163ms
     DEBUG hyper::proto::h1::io                > flushed 372 bytes
    
    
     DEBUG hyper::proto::h1::io                > read 643 bytes
     DEBUG hyper::proto::h1::io                > parsed 14 headers
     DEBUG hyper::proto::h1::conn              > incoming body is content-length (109 bytes)
     DEBUG hyper::proto::h1::conn              > incoming body completed
     DEBUG syos::graphql::utils::loaders::user > load batch [ObjectId(593fdd2ba9c0edf74ff0b38c), ObjectId(593fdd2ba9c0edf74ff0b38c)]
     INFO  GraphQL                             > 127.0.0.1:45080 "POST /graphql HTTP/1.1" 200 "http://127.0.0.1:4444/graphiql" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36" 12.120952ms
     DEBUG hyper::proto::h1::io                > flushed 372 bytes
    
    
     DEBUG hyper::proto::h1::io                > read 643 bytes
     DEBUG hyper::proto::h1::io                > parsed 14 headers
     DEBUG hyper::proto::h1::conn              > incoming body is content-length (109 bytes)
     DEBUG hyper::proto::h1::conn              > incoming body completed
     DEBUG syos::graphql::utils::loaders::user > load batch [ObjectId(593fdd2ba9c0edf74ff0b38c), ObjectId(593fdd2ba9c0edf74ff0b38c)]
     INFO  GraphQL                             > 127.0.0.1:45080 "POST /graphql HTTP/1.1" 200 "http://127.0.0.1:4444/graphiql" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36" 10.009887ms
     DEBUG hyper::proto::h1::io                > flushed 372 bytes
    

    We can see that sometimes load batch is called with 2 ids, and sometimes it's called twice with the same id.

    More info:

    • tokio runtime (tried both threaded_scheduler and basic_scheduler with the same result)
    • warp http server
    • juniper master branch
    • mongodb database (for which the driver is not async yet)
    • context is created per request (and so are loaders)

    I can provide more code if necessary, or maybe even a simple repo to reproduce the issue.

    opened by IcanDivideBy0 19
  • How to use it with Juniper?


    Before Rust 1.39, I successfully used this library. But with the advent of async/await, I was completely confused. Below is an example of my broken code:

    #[juniper::object(Context = Context)]
    impl Article {
        fn id(&self) -> i32 {
            self.id
        }
    
        fn title(&self) -> &str {
            self.title.as_str()
        }
    
        async fn authors(&self, context: &Context) -> FieldResult<Vec<Author>> {
            let authors = context.authors_loader().load_many(self.author_ids.clone());
            Ok(authors.await)
        }
    }
    

    I'm trying to load the authors of the article with dataloader, but I got confused by the return types and how to get the authors back from the dataloader. The compiler gives me an error:

    error[E0728]: `await` is only allowed inside `async` functions and blocks
      --> src/schema.rs:52:12
       |
    40 | #[juniper::object(Context = Context)]
       | ------------------------------------- this is not `async`
    ...
    52 |         Ok(authors.await)
       |            ^^^^^^^^^^^^^ only allowed inside `async` functions and blocks
    
    
    error[E0308]: mismatched types
      --> src/schema.rs:52:12
       |
    52 |         Ok(authors.await)
       |            ^^^^^^^^^^^^^ expected struct `std::vec::Vec`, found enum `std::result::Result`
       |
       = note: expected type `std::vec::Vec<_>`
                  found type `std::result::Result<std::vec::Vec<_>, dataloader::LoadError<()>>`
    
    
    error: aborting due to 2 previous errors
    
    opened by acelot 13
  • Using dataloader with juniper (again)


    I have returned to my abandoned GraphQL project :) I'm trying to solve the N+1 problem using dataloader-rs along with juniper and actix-web. I have two simple entities: Article and Author. One article has many authors. Below I'll show the most important parts of my program.

    Actix handler:

    use crate::graphql::context::Context;
    use crate::graphql::schema::{create_schema, Schema};
    use actix_web::{web, Error, HttpResponse};
    use juniper::http::GraphQLRequest;
    use std::sync::Arc;
    
    async fn graphql(
        ctx: web::Data<Context>,
        schema: web::Data<Arc<Schema>>,
        req: web::Json<GraphQLRequest>,
    ) -> Result<HttpResponse, Error> {
        let response = req.execute(&schema, &ctx).await;
        let json = serde_json::to_string(&response)?;
        Ok(HttpResponse::Ok()
            .content_type("application/json")
            .body(json))
    }
    

    Authors dataloader:

    use crate::environment::db::Database;
    use crate::models::author::Author;
    use dataloader::BatchFn;
    use std::collections::HashMap;
    use tokio_postgres::Row;
    
    pub struct AuthorsLoader {
        pub db: Database,
    }
    
    #[async_trait::async_trait]
    impl BatchFn<i32, Author> for AuthorsLoader {
        async fn load(&self, keys: &[i32]) -> HashMap<i32, Author> {
            let client = self.db.get_client().await.unwrap();
    
            let map: HashMap<i32, Author> = client
                .query(
                    "SELECT * FROM articles.authors WHERE id = ANY($1)",
                    &[&keys.to_vec()],
                )
                .await
                .unwrap()
                .into_iter()
                .map(|r: Row| (r.get("id"), Author::from(r)))
                .collect();
    
            map
        }
    }
    

    Article type resolver:

    use crate::{graphql::context::Context, models::{author::Author, article::Article}};
    use itertools::Itertools;
    
    #[juniper::graphql_object(Context = Context)]
    impl Article {
        fn id(&self) -> &i32 {
            &self.id
        }
    
        fn title(&self) -> &str {
            &self.title
        }
    
        async fn authors(&self, ctx: &Context) -> Vec<Author> {
            let a = ctx.authors_loader
                .load_many(self.author_ids.clone())
                .await
                .values()
                .cloned()
                .collect_vec();
    
            a
        }
    }
    

    Deps:

    [dependencies]
    log = "0.4"
    chrono = "0.4"
    actix-web = "2.0"
    actix-rt = "1.0"
    env_logger = "0.7"
    serde = { version = "1.0", features = ["derive"] }
    serde_json = "1.0"
    juniper = { git = "https://github.com/graphql-rust/juniper" }
    dotenv = "0.15"
    tokio = "0.2"
    tokio-postgres = "0.5"
    dataloader = "0.12"
    futures = "0.3"
    async-trait = "0.1"
    itertools = "0.9"
    

    Problem

    The panic occurs when the server tries to load authors through the load_many method:

    thread 'actix-rt:worker:0' panicked at 'found key 0 in load result', <::std::macros::panic macros>:5:6
    

    What am I doing wrong?

    opened by acelot 7
  • Rework engine and remove tokio


    Related to #3
    Continues #4

    Overview

    This PR introduces a new batch loading implementation using futures only and removes the necessity of using the tokio crate.

    Motivation

    The current batch loading implementation has the following drawbacks:

    • Thread spawning. Each time a Loader is created, a new thread is spawned. As a dataloader is a scope-based thing (it lives in the context of a concrete request), this means creating a new thread for every request, which has a significant performance impact on the whole application.
    • Inevitability of tokio usage. Loader creates a new tokio::runtime::current_thread in the spawned thread. This means that the library user has to use this runtime and cannot choose the desired runtime and guarantees for executing loading futures.
    • Eager loading. Loader::load() sends the loading task to a separate thread via a channel on invocation. This means that loading may be performed even if the returned LoadFuture is never spawned. This conflicts with the idiomatic Rust vision of futures being lazy: "a future does nothing unless polled".
    • Non-deterministic batching. As loading tasks are queued via channels to a separate thread which runs concurrently, the exact batch formation and execution depend on how the concurrent threads happen to be synchronized at a given moment, which leads to different batching results and guarantees on the same inputs. For example, example/simple.rs performs differently on each execution:
      -- Using Cached Loader --
      load batch [3, 4]
      load batch [30, 35, 40, 45, 50]
      
       -- Using Cached Loader --
      load batch [3, 4]
      load batch [30, 35, 40]
      load batch [45, 50]
      

    Algorithm

    The key point of a dataloader implementation is the ability to defer loading until all possible loading tasks have been gathered. There is no general algorithm afaik, and each implementation does this trick in its own way. For example, graphql/dataloader uses a Node.js-specific API or a setTimeout hack for scheduling loading at the end of the event loop, and graph-gophers/dataloader uses a time.After hack too. ~~However, in Rust we are not required to resort to hacks, as std::future allows us to use the Waker directly in a Future, so we can easily reschedule a future to the end of the event loop.~~
    Upd: In Rust we should use a hack with Delay::new(Duration::from_nanos(1)), as using the Waker directly does not defer the future well enough.

    This PR's implementation:

    • ~~Uses the Waker to reschedule the enqueued loading task to the end of the event loop, so that loading happens only when nothing more can be scheduled. Despite the fact that scheduled futures can be executed concurrently on a ThreadPool, any new loading task is added to the end of the event loop, so batching and loading have deterministic guarantees.~~
      Upd: Uses Delay::new(Duration::from_nanos(1)) to defer the enqueued loading task as soon as possible. This, however, does not guarantee good deferring for batch loading, which is why the next step is required (see the sketch after this list).
    • ~~Does not enqueue anything if the Future returned by Loader::load() is never spawned, so it meets the idiom "a future does nothing unless polled".~~
      Upd: Eagerly enqueues the loading task directly in Loader::load(). This partially violates the semantic "future does nothing unless polled". See why.
    • Performs loading directly on the executor where the Future returned by Loader::load() is spawned, so it's up to the user to choose the desired runtime.
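
    A minimal sketch of the Delay-based deferring described above, assuming the futures-timer crate (this illustrates the trick, not this PR's actual internals):

    use futures_timer::Delay;
    use std::time::Duration;

    // Yield to the executor for one "tick" before dispatching the gathered
    // batch, so sibling load() futures already scheduled on the same executor
    // get a chance to enqueue their keys into this batch first.
    async fn dispatch_deferred(batch: Vec<usize>) {
        Delay::new(Duration::from_nanos(1)).await;
        println!("load batch {:?}", batch);
    }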

    Caveats

    • Implementation internally uses a HashMap (as a lot of index operations are performed), so the order of keys passed into BatchFn::load() is not the historical (enqueuing) order, because HashMap has no ordering. This should not introduce any issues, however, as a BatchFn implementation shouldn't depend on such things.

    • ~~As loading is more lazy now, the .cached() version doesn't always work the way it did before. For example, example/simple.rs now performs this way:~~

       -- Using Cached Loader --
      load batch [3, 4]
      load batch [30, 45, 40, 40, 35, 50]
      

      ~~As we can see, no cache reads are involved, as all keys of the second stage are loaded at once.~~
      ~~Currently, keys are not restricted by any trait bounds at all. In the next PR I'll introduce dedupe adaptors for K: PartialEq keys, so we can omit passing duplicate keys to BatchFn::load().~~

    • A new type parameter F: BatchFn is imposed on the Loader type. This, however, feels right, as it disallows mixing Loaders with different BatchFns at the type level (compile error).

    Checklist

    • [x] Code is formatted with rustfmt.
    • [x] Tests are updated and pass (removed tokio usage)
    • [x] Example is updated (removed tokio usage) and new tokio example is created
    • [x] README and crate description are updated

    Future work

    • Try to lift more trait bounds from the library's type params and reconsider the library API to be more idiomatic with std::future (non-error futures, etc.).
    • Introduce a sort of LocalLoader to avoid thread-synchronization costs for use cases where everything happens on a single thread (relatively easy: just replace Arc<Mutex<State>> with Rc<RefCell<State>>).
    • Add documentation, more examples, and tests which check the batching guarantees.
    opened by tyranron 7
  • Consider adding prime_many to loader interface


    For example, when a search result returns not just the keys but the data itself, it'd be good to populate the cache from the response, so new requests for the already-known values can be omitted. As I see it, prime can be used to add values to the cache, but it takes a lock on each addition. prime_many could accept an iterator of (K, V) pairs and hold the lock for the entire update.

    Or is there any other option to fill the loader with known values?
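
    For illustration, a hedged sketch of the intended usage (prime_many is the proposed, hypothetical method; prime is the existing single-entry API, and the surrounding names are illustrative):

    // Populate the cache from a search response so later load() calls for
    // these users can be served from the cache.
    for user in &search_results {
        loader.prime(user.id, user.clone()).await; // today: one lock per entry
    }
    // Proposed: one lock held across the whole update.
    loader.prime_many(search_results.iter().map(|u| (u.id, u.clone()))).await;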

    opened by gzp-crey 4
  • Why dataloader is handling errors?


    Hello, I'm wondering why the cached loader has to handle errors; I don't see any particular treatment of the results in this crate. Wouldn't it be simpler if the loader could handle whatever value type users want it to handle?

    Loader could be declared as follows:

    pub struct Loader<K, V, F, C = HashMap<K, V>>
    where
        K: Eq + Hash + Clone,
        V: Clone,
        F: BatchFn<K, V>,
        C: Cache<Key = K, Val = V>,
    

    This would still allow anyone to use a (cloneable) Result as the output value.

    In particular, I'd like to use the loader to return a Vec<Result<_, _>> for each key given to the BatchFn. What do you think about this design change?
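
    Under that design, per-key fallibility would be expressed by choosing a Result as the value type; a minimal sketch of that usage (User and UserBatcher are illustrative names):

    use async_trait::async_trait;
    use dataloader::BatchFn;
    use std::collections::HashMap;

    #[derive(Clone, Debug)]
    struct User { id: i32 }

    struct UserBatcher;

    // The error type only needs to be Clone so the cache can hand out copies.
    #[async_trait]
    impl BatchFn<i32, Result<User, String>> for UserBatcher {
        async fn load(&mut self, keys: &[i32]) -> HashMap<i32, Result<User, String>> {
            keys.iter()
                .map(|&id| {
                    if id > 0 {
                        (id, Ok(User { id }))
                    } else {
                        (id, Err("not found".to_string()))
                    }
                })
                .collect()
        }
    }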

    opened by IcanDivideBy0 4
  • Upgrading to std Futures and latest tokio


    Hi, there! Thank you for this crate 🙇

    I'd like to use it extensively, but is it still maintained? The latest change was made on Aug 4, 2018, and master depends on tokio-core, which was deprecated long ago.

    How about upgrading to the latest tokio and refactoring with std Futures, which will land in the upcoming Rust 1.36 release? Would you mind if I elaborated on it?

    opened by tyranron 3
  • Specify fields in key, or allow separate load/cache keys


    I want to load only the requested fields from the database. For example, take this GraphQL type:

    type Person {
      id
      name
      birthday
    }
    

    With the following request:

    person {
      id
      name
    }
    

    Rather than doing SELECT * FROM people, I want to get only the id and name fields, not birthday.

    It seems that the official Node-based module recommends using separate cache and load keys. This means BatchFn might use (PersonID, Vec<String>) as its key, but the Loader would use PersonID. Multiple load keys map to one cache key, so it'd be up to the BatchFn implementation to dedupe the person/fields combinations, as sketched below.
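
    As an illustration of that dedup step (the names and types here are hypothetical, not the crate's API):

    use std::collections::{HashMap, HashSet};

    type PersonId = i32;

    // Merge the requested field sets per person id, so a single query per
    // batch can select the union of the fields each person needs.
    fn merge_field_sets(keys: &[(PersonId, Vec<String>)]) -> HashMap<PersonId, HashSet<String>> {
        let mut merged: HashMap<PersonId, HashSet<String>> = HashMap::new();
        for (id, fields) in keys {
            merged.entry(*id).or_default().extend(fields.iter().cloned());
        }
        merged
    }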

    opened by oeed 2
  • how to pass argument to load method


    I am using async_graphql and dataloader and it works fine.

    But when I want to limit the output using the last argument, I don't know how to do this with dataloader.

    authors {
     name
     books(last: 30)
    }
    

    Getting books is called like this in the GraphQL object (without the dataloader):

    pub async fn books(
        &self,
        ctx: &Context<'_>,
        last: i32,
    ) -> Result<Vec<Books>, AppError> {
        ctx.data_unchecked::<BooksRepository>()
            .get_for_id(self.id, last)
            .await
    }
    

    When using the dataloader, I don't know how to make it work.

    pub async fn books(
        &self,
        ctx: &Context<'_>,
        last: i32,
    ) -> Result<Vec<Books>, AppError> {
        let loader = ctx.data_unchecked::<BooksLoader>();
        loader.load(self.id).await
    }
    

    Maybe I just overlooked something or am just not having my day, but I don't know how to solve it. Thanks for any suggestions.
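
    One common approach (a hedged sketch, not from this thread) is to fold the argument into the loader key, so batching groups identical (id, last) pairs; here BooksLoader is assumed to be keyed by (i32, i32):

    pub async fn books(
        &self,
        ctx: &Context<'_>,
        last: i32,
    ) -> Result<Vec<Books>, AppError> {
        let loader = ctx.data_unchecked::<BooksLoader>();
        // The BatchFn behind BooksLoader would receive (author_id, last)
        // pairs and can apply the limit per group in its query.
        loader.load((self.id, last)).await
    }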

    opened by fbucek 2
  • About BatchFn load function returns


    Hi, cksac

    Maybe returning Result<HashMap<K, V>, dataloader::Error> would be better here, since most of the time we query the db in this function. What do you think?
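
    That is, something along these lines (a sketch of the proposal; dataloader::Error does not exist in the crate today, and the db helper is illustrative):

    #[async_trait]
    impl BatchFn<i32, Author> for AuthorsLoader {
        async fn load(&mut self, keys: &[i32]) -> Result<HashMap<i32, Author>, dataloader::Error> {
            // A failing db query could then surface as an Err instead of a panic.
            let authors = self.db.fetch_authors(keys).await?;
            Ok(authors.into_iter().map(|a| (a.id, a)).collect())
        }
    }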

    opened by vkill 2
  • Doesn't build: the trait futures_core::stream::Stream is not implemented for ext::TimeoutStream<S>


    Not exactly sure what is wrong here, probably something with futures having changed.

    /U/d/De/dataloader-rs [master@0248d52e50a2d]
    $ rustup override set nightly
    info: using existing install for 'nightly-x86_64-apple-darwin'
    info: override toolchain for '/Users/davidpdrsn/Desktop/dataloader-rs' set to 'nightly-x86_64-apple-darwin'
    
      nightly-x86_64-apple-darwin unchanged - rustc 1.40.0-nightly (e413dc36a 2019-10-14)
    
    
    /U/d/De/dataloader-rs [master@0248d52e50a2d]
    $ cargo build
       Compiling futures-timer v0.2.1
    error[E0277]: the trait bound `ext::TimeoutStream<S>: futures_core::stream::Stream` is not satisfied
       --> /Users/davidpdrsn/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-timer-0.2.1/src/ext.rs:170:9
        |
    170 | impl<S> TryStream for TimeoutStream<S>
        |         ^^^^^^^^^ the trait `futures_core::stream::Stream` is not implemented for `ext::TimeoutStream<S>`
    
    error: aborting due to previous error
    
    For more information about this error, try `rustc --explain E0277`.
    error: could not compile `futures-timer`.
    
    To learn more, run the command again with --verbose.
    
    opened by davidpdrsn 2
Owner
cksac