Thread-Safe Cache with async loader functions based on tokio-rs

Overview

cache-loader-async


The goal of this crate is to provide a thread-safe and easy way to access any data structure which might be stored in a database, loading it at most once and keeping it in cache for further requests.

This library is based on tokio-rs and futures.

Usage

Using this library is as easy as that:

use std::collections::HashMap;
use cache_loader_async::cache_api::LoadingCache;

#[tokio::main]
async fn main() {
    let static_db: HashMap<String, u32> =
        vec![("foo".into(), 32), ("bar".into(), 64)]
            .into_iter()
            .collect();
    
    let (cache, _) = LoadingCache::new(move |key: String| {
        let db_clone = static_db.clone();
        async move {
            db_clone.get(&key).cloned().ok_or("error-message")
        }
    });

    let result = cache.get("foo".to_owned()).await.unwrap().0;

    assert_eq!(result, 32);
}

The LoadingCache will first try to look up the result in an internal HashMap. If it's not found and no load is ongoing, it will fire the load request and queue any other get requests for the same key until the load request finishes.

Features & Cache Backings

The cache-loader-async library currently supports two additional inbuilt backings: LRU & TTL. LRU evicts keys based on the cache's maximum size, while TTL evicts keys automatically after their TTL expires.

LRU Backing

You can use a simple pre-built LRU cache from the lru-rs crate by enabling the lru-cache feature.

To create a LoadingCache with lru cache backing use the with_backing method on the LoadingCache.

use cache_loader_async::cache_api::LoadingCache;
use cache_loader_async::backing::LruCacheBacking;

#[tokio::main]
async fn main() {
    let size: usize = 10;
    let (cache, _) = LoadingCache::with_backing(LruCacheBacking::new(size), move |key: String| {
        async move {
            Ok(key.to_lowercase())
        }
    });
}

TTL Backing

You can use a simple pre-built TTL cache by enabling the ttl-cache feature. This does not require any additional dependencies.

To create a LoadingCache with ttl cache backing use the with_backing method on the LoadingCache.

use std::time::Duration;
use cache_loader_async::cache_api::LoadingCache;
use cache_loader_async::backing::TtlCacheBacking;

#[tokio::main]
async fn main() {
    let duration: Duration = Duration::from_secs(30);
    let (cache, _) = LoadingCache::with_backing(TtlCacheBacking::new(duration), move |key: String| {
        async move {
            Ok(key.to_lowercase())
        }
    });
}

Own Backing

To implement your own cache backing, simply implement the public CacheBacking trait from the backing module.

pub trait CacheBacking<K, V>
    where K: Eq + Hash + Sized + Clone + Send,
          V: Sized + Clone + Send {
    fn get(&mut self, key: &K) -> Option<&V>;
    fn set(&mut self, key: K, value: V) -> Option<V>;
    fn remove(&mut self, key: &K) -> Option<V>;
    fn contains_key(&self, key: &K) -> bool;
}

You might also like...
Thread-safe cell based on atomic pointers to externally stored data

Simple thread-safe cell PtrCell is an atomic cell type that allows safe, concurrent access to shared data. No std, no data races, no nasal demons (UB)

A Rust on-site channel benchmarking helper. Inter-Process (async / busy) & Intra-Process (async single threaded / async multi threaded)

On-Site Rust Channel Benchmarking Helper Deploy on server to determine which public crates are the fastest for communicating in different architecture

Provides utility functions to perform a graceful shutdown on an tokio-rs based service

tokio-graceful-shutdown IMPORTANT: This crate is in an early stage and not ready for production. This crate provides utility functions to perform a gr

A generational arena based LRU Cache implementation in 100% safe rust.

generational-lru Crate providing a 100% safe, generational arena based LRU cache implementation. use generational_lru::lrucache::{LRUCache, CacheError

Fast, initializable, and thread safe static variables

tagged_cell Fast, initializable, and thread safe static variables Overview Borrows the excellent ZST based tagging implementation (linked below) to gu

A simple, stable and thread-safe implementation of a lazy value

Laizy Laizy is a Rust library that provides a simple, stable and thread-safe implementation of a Lazy Features Name Description Dependencies nightly A

Thread-safe clone-on-write container for fast concurrent writing and reading.

sync_cow Thread-safe clone-on-write container for fast concurrent writing and reading. SyncCow is a container for concurrent writing and reading of da

Linked Atomic Random Insert Vector: a thread-safe, self-memory-managed vector with no guaranteed sequential insert.

Linked Atomic Random Insert Vector Lariv is a thread-safe, self-memory-managed vector with no guaranteed sequential insert. It internally uses a linke

Fork of async-raft, the Tokio-based Rust implementation of the Raft protocol.

Agreed Fork of async-raft, the Tokio-based Rust implementation of the Raft distributed consensus protocol. Agreed is an implementation of the Raft con

Simple crate that wraps a tokio::process into a tokio::stream

tokio-process-stream tokio-process-stream is a simple crate that wraps a tokio::process into a tokio::stream Having a stream interface to processes is

A set of safe Least Recently Used (LRU) map/cache types for Rust

LruMap A set of safe Least-Recently-Used (LRU) cache types aimed at providing flexible map-like structures that automatically evict the least recently

A high level async Redis client for Rust built on Tokio and Futures.

A high level async Redis client for Rust built on Tokio and Futures.

Russh - Async (tokio) SSH2 client and server reimplementation

Russh Async (tokio) SSH2 client and server reimplementation. This is a fork of Thrussh by Pierre-Étienne Meunier which adds: More safety guarantees AES

An actors library for Rust and Tokio designed to work with async / await message handlers out of the box.

Akt An actors framework for Rust and Tokio. It is heavily inspired by Actix and right now it has very similar look and feel. The main difference is th

Graceful shutdown util for Rust projects using the Tokio Async runtime.

Shutdown management for graceful shutdown of tokio applications. Guard creating and usage is lock-free and the crate only locks when: the shutdown sig

Async Rust cron scheduler running on Tokio.

Grizzly Cron Scheduler A simple and easy to use scheduler, built on top of Tokio, that allows you to schedule async tasks using cron expressions (with

Fault-tolerant Async Actors Built on Tokio

Kameo 🧚🏻 Fault-tolerant Async Actors Built on Tokio Async: Built on tokio, actors run asynchronously in their own isolated spawned tasks. Supervision

Const equivalents of many [`bytemuck`] functions, and a few additional const functions.

Const equivalents of many bytemuck functions, and a few additional const functions. constmuck uses bytemuck's traits, so any type that implements thos

Tokio based client library for the Android Debug Bridge (adb) based on mozdevice

forensic-adb Tokio based client library for the Android Debug Bridge (adb) based on mozdevice for Rust. Documentation This code has been extracted fro

Comments
  • [question] TtlCacheBacking - SystemTime or Instant?

    Hi,

    I was looking at this crate, evaluating it for my use. While examining the code of TtlCacheBacking I've noticed that it's using SystemTime in order to determine the age of a cached entry.

    My question is, why SystemTime instead of Instant? Was this due to a problem which you've encountered?

    The reason for my question is a caveat with SystemTime: each call to SystemTime::now() translates to a system call (at least on Linux), a relatively slow operation. Under heavy load, the effect of this can be significant. Instant, on the other hand, does not suffer from the same problem.

    opened by tsnoam 4
  • Only turn on tokio features that are used by this crate

    Hi! It looks like this crate doesn't actually need all of tokio's features that are turned on by full -- I turned them all off and then turned them back on one-by-one until the tests passed.

    This way, crates that depend on this one won't be forced to use full by feature unification.

    Thanks!

    opened by carols10cents 1
  • Feedback!

    Thanks for providing this library!

    I couldn't see how to feedback on my experience, but I just wanted to convey that using the library has been very pleasant. Here's a code snippet of how I set up my cache to read secrets from Hashicorp's Vault:

    let cache: TtlCache = LoadingCache::with_meta_loader(
        TtlCacheBacking::with_backing(
            *unauthorized_timeout,
            LruCacheBacking::new(max_secrets_cached),
        ),
        move |secret_path| {
            let task_state = Arc::clone(&state);
            async move {
                let result = task_state
                    .client
                    .get(
                        task_state
                            .server
                            .join(&format!(
                                "{}v1/secret/data/{}",
                                task_state.server, secret_path
                            ))
                            .unwrap(),
                    )
                    .bearer_auth(&task_state.client_token)
                    .send()
                    .await;
                if let Ok(response) = result {
                    let secret_reply = if response.status().is_success() {
                        response.json::<GetSecretReply>().await.ok()
                    } else {
                        None
                    };
                    let lease_duration = secret_reply.as_ref().map(|sr| {
                        let mut lease_duration = None;
                        if let Some(ttl_field) = task_state.ttl_field.as_ref() {
                            if let Some(ttl) = sr.data.data.get(ttl_field) {
                                if let Ok(ttl_duration) = ttl.parse::<humantime::Duration>() {
                                    lease_duration = Some(ttl_duration.into());
                                }
                            }
                        }
                        lease_duration.unwrap_or_else(|| Duration::from_secs(sr.lease_duration))
                    });
                    Ok(secret_reply).with_meta(lease_duration.map(TtlMeta::from))
                } else {
                    Err(Error::Unavailable)
                }
            }
        },
    );
    

    Feel free to use this as a more complex (albeit incomplete) example if that helps. Thanks once again!

    opened by huntc 1
Releases(v0.2.1)
  • v0.2.1(Sep 9, 2022)

    What's Changed

    • upgrade lru dependency to 0.7.8 by @uwemaurer in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/15

    New Contributors

    • @uwemaurer made their first contribution in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/15

    Full Changelog: https://github.com/ZeroTwo-Bot/cache-loader-async-rs/compare/v0.2.0...v0.2.1

  • v0.2.0(Jan 23, 2022)

    What's Changed

    • Replace SystemTime with Instant by @ByteAlex in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/8
    • Remove CacheHandle, Implement cache clear by @ByteAlex in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/9
    • Only turn on tokio features that are used by this crate by @carols10cents in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/10
    • Improve ttl backing by @ByteAlex in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/11
    • Backing improvements by @ByteAlex in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/12
    • Open up TTL backing for nested backings by @ByteAlex in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/13

    New Contributors

    • @carols10cents made their first contribution in https://github.com/ZeroTwo-Bot/cache-loader-async-rs/pull/10

    Full Changelog: https://github.com/ZeroTwo-Bot/cache-loader-async-rs/compare/v0.1.1...v0.2.0

Owner
ZeroTwo Bot
Github Repository for the Discord bot "ZeroTwo"
Fast multi-producer, multi-consumer unbounded channel with async support.

Hyperbridge Fast multi-producer, multi-consumer unbounded channel with async support. Inspired by crossbeam unbounded channel. Examples Hyperbridge::c

Anton 1 Apr 20, 2022
[no longer maintained] Scalable, coroutine-based, fibers/green-threads for Rust. (aka MIO COroutines).

Documentation mioco Mioco provides green-threads (aka fibers) like eg. Goroutines in Go, for Rust. Status This repo is a complete re-implementation of

Dawid Ciężarkiewicz 137 Dec 19, 2022
Mix async code with CPU-heavy thread pools using Tokio + Rayon

tokio-rayon Mix async code with CPU-heavy thread pools using Tokio + Rayon Resources Documentation crates.io TL;DR Sometimes, you're doing async stuff

Andy Barron 74 Jan 2, 2023
Rustato: A powerful, thread-safe global state management library for Rust applications, offering type-safe, reactive state handling with an easy-to-use macro-based API.

Rustato State Manager A generical thread-safe global state manager for Rust Introduction • Features • Installation • Usage • Advanced Usage • Api Refe

BiteCraft 8 Sep 16, 2024
Utilities for tokio/tokio-uring based async IO

dbs-fuse The dbs-fuse is a utility crate to support fuse-backend-rs. Wrappers for Rust async io It's challenging to support Rust async io, and it's ev

OpenAnolis Community 6 Oct 23, 2022
Lagoon is a thread pool crate that aims to address many of the problems with existing thread pool crates.

Lagoon is a thread pool crate that aims to address many of the problems with existing thread pool crates. Example Lagoon's scoped jobs can be u

Joshua Barretto 29 Dec 27, 2022
📺 Netflix in Rust/ React-TS/ NextJS, Actix-Web, Async Apollo-GraphQl, Cassandra/ ScyllaDB, Async SQLx, Kafka, Redis, Tokio, Actix, Elasticsearch, Influxdb Iox, Tensorflow, AWS

Fullstack Movie Streaming Platform ?? Netflix in RUST/ NextJS, Actix-Web, Async Apollo-GraphQl, Cassandra/ ScyllaDB, Async SQLx, Spark, Kafka, Redis,

null 34 Apr 17, 2023
An async executor based on the Win32 thread pool API

wae An async executor based on the Win32 thread pool API use futures::channel::oneshot; #[wae::main] async fn main() { let (tx, rx) = oneshot::ch

Raphaël Thériault 10 Dec 10, 2021
A thread-safe signal/slot library based on boost::signals2

About signals2 is a thread-safe signal/slot library based on the boost::signals2 C++ library. Signals are objects that contain a list of callback func

Christian Daley 15 Dec 21, 2022
Simple, thread-safe, counter based progress logging

?? proglog Documentation Crates.io This is a simple, thread-safe, count-based, progress logger. Synopsis proglog hooks into your existing log implemen

Seth 5 Nov 7, 2022