cached

Rust cache structures and simplified function memoization

Overview

cached provides implementations of several caching structures as well as a handy macro for defining memoized functions.

Memoized functions defined using #[cached]/cached! macros are thread-safe with the backing function-cache wrapped in a mutex. The function-cache is not locked for the duration of the function's execution, so initial (on an empty cache) concurrent calls of long-running functions with the same arguments will each execute fully and each overwrite the memoized value as they complete. This mirrors the behavior of Python's functools.lru_cache.
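
A minimal sketch of that behavior (function name and timing values illustrative): two initial concurrent calls both run the full body, while later calls return the cached value immediately.

use std::thread;
use std::time::{Duration, Instant};
use cached::proc_macro::cached;

#[cached]
fn slow_double(n: u64) -> u64 {
    thread::sleep(Duration::from_secs(2)); // stand-in for real work
    n * 2
}

fn main() {
    let start = Instant::now();
    // Cache is empty: both initial calls execute the full 2s body.
    let a = thread::spawn(|| slow_double(1));
    let b = thread::spawn(|| slow_double(1));
    a.join().unwrap();
    b.join().unwrap();
    // The calls ran concurrently rather than serialized behind the lock.
    assert!(start.elapsed() < Duration::from_secs(4));

    let start = Instant::now();
    slow_double(1); // now cached: returns without sleeping
    assert!(start.elapsed() < Duration::from_secs(1));
}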

See cached::stores docs for details about the cache stores available.
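
The stores can also be used directly, without the macros. A minimal sketch using SizedCache through the Cached trait:

use cached::{Cached, SizedCache};

fn main() {
    // An LRU store holding at most 10 entries.
    let mut cache: SizedCache<String, usize> = SizedCache::with_size(10);
    cache.cache_set("hello".to_string(), 5);
    assert_eq!(cache.cache_get(&"hello".to_string()), Some(&5));
    assert_eq!(cache.cache_get(&"missing".to_string()), None);
}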

Features

  • proc_macro: (default) pull in proc macro support
  • async: (default) Add CachedAsync trait
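
For example, in Cargo.toml (version number illustrative):

[dependencies]
# default features, including `proc_macro` and `async`
cached = "0.32"

# or opt out of the defaults, e.g. to avoid compiling syn
# cached = { version = "0.32", default-features = false }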

Defining memoized functions using macros, #[cached] & cached!

Notes on the proc-macro version #[cached]

  • enabled by default, but can be disabled by specifying default-features = false (if you aren't using it and don't want to have to compile syn)
  • supports more features at this point than the original collection of cached! macros do
  • works with async functions
  • see cached_proc_macro/src/lib.rs and examples below for more details on macro arguments
  • see examples/kitchen_sink_proc_macro.rs for basic usage

The basic usage looks like:

use cached::proc_macro::cached;

/// Defines a function named `fib` that uses a cache implicitly named `FIB`.
/// By default, the cache is named after the function in all caps.
/// The following line is equivalent to #[cached(name = "FIB", unbound)]
#[cached]
fn fib(n: u64) -> u64 {
    if n == 0 || n == 1 { return n }
    fib(n-1) + fib(n-2)
}

use std::thread::sleep;
use std::time::Duration;
use cached::proc_macro::cached;

/// Use an lru cache with size 100 and a `(String, String)` cache key
#[cached(size=100)]
fn keyed(a: String, b: String) -> usize {
    let size = a.len() + b.len();
    sleep(Duration::new(size as u64, 0));
    size
}

use std::thread::sleep;
use std::time::Duration;
use cached::proc_macro::cached;

/// Use a timed-lru cache with size 1, a TTL of 60s,
/// and a `(usize, usize)` cache key
#[cached(size=1, time=60)]
fn keyed(a: usize, b: usize) -> usize {
    let total = a + b;
    sleep(Duration::new(total as u64, 0));
    total
}
pub fn main() {
    keyed(1, 2);  // Not cached, will sleep (1+2)s

    keyed(1, 2);  // Cached, no sleep

    sleep(Duration::new(60, 0));  // Sleep for the TTL

    keyed(1, 2);  // 60s TTL has passed so the cached
                  // value has expired, will sleep (1+2)s

    keyed(1, 2);  // Cached, no sleep

    keyed(2, 1);  // New args, not cached, will sleep (2+1)s

    keyed(1, 2);  // Was evicted because of lru size of 1,
                  // will sleep (1+2)s
}

use std::thread::sleep;
use std::time::Duration;
use cached::proc_macro::cached;

/// Use a timed cache with a TTL of 60s
/// and a `(String, String)` cache key
#[cached(time=60)]
fn keyed(a: String, b: String) -> usize {
    let size = a.len() + b.len();
    sleep(Duration::new(size as u64, 0));
    size
}

use cached::proc_macro::cached;

/// Cache a fallible function. Only `Ok` results are cached.
#[cached(size=1, result = true)]
fn keyed(a: String) -> Result<usize, ()> {
    do_something_fallible()?;
    Ok(a.len())
}

use cached::proc_macro::cached;

/// Cache an optional function. Only `Some` results are cached.
#[cached(size=1, option = true)]
fn keyed(a: String) -> Option<usize> {
    if a == "a" {
        Some(a.len())
    } else {
        None
    }
}

use cached::proc_macro::cached;
use cached::Return;

/// Get a `cached::Return` value that indicates
/// whether the returned value came from the cache:
/// `cached::Return.was_cached`.
/// Uses an LRU cache of size 1 and a `String` cache key.
#[cached(size=1, with_cached_flag = true)]
fn calculate(a: String) -> Return<String> {
    Return::new(a)
}
pub fn main() {
    let r = calculate("a".to_string());
    assert!(!r.was_cached);
    let r = calculate("a".to_string());
    assert!(r.was_cached);
    // Return<String> derefs to String
    assert_eq!(r.to_uppercase(), "A");
}

use cached::proc_macro::cached;
use cached::Return;

/// Same as the previous, but returning a Result
#[cached(size=1, result = true, with_cached_flag = true)]
fn calculate(a: String) -> Result<Return<usize>, ()> {
    do_something_fallible()?;
    Ok(Return::new(a.len()))
}
pub fn main() {
    match calculate("a".to_string()) {
        Err(e) => eprintln!("error: {:?}", e),
        Ok(r) => {
            println!("value: {:?}, was cached: {}", *r, r.was_cached);
            // value: 1, was cached: true
        }
    }
}

use cached::proc_macro::cached;
use cached::Return;

/// Same as the previous, but returning an Option
#[cached(size=1, option = true, with_cached_flag = true)]
fn calculate(a: String) -> Option<Return<usize>> {
    if a == "a" {
        Some(Return::new(a.len()))
    } else {
        None
    }
}
pub fn main() {
    if let Some(a) = calculate("a".to_string()) {
        println!("value: {:?}, was cached: {}", *a, a.was_cached);
        // value: 1, was cached: true
    }
}

use std::thread::sleep;
use std::time::Duration;
use cached::proc_macro::cached;
use cached::SizedCache;

/// Use an explicit cache-type with a custom creation block and custom cache-key generating block
#[cached(
    type = "SizedCache<String, usize>",
    create = "{ SizedCache::with_size(100) }",
    convert = r#"{ format!("{}{}", a, b) }"#
)]
fn keyed(a: &str, b: &str) -> usize {
    let size = a.len() + b.len();
    sleep(Duration::new(size as u64, 0));
    size
}

Functions defined with #[cached]/cached! will have their results cached using the function's arguments as a key (or a specific expression when using cached_key!). When such a function is called, its cache is first checked for an already-computed (and still valid) value before the function body is evaluated.

Due to the requirements of storing arguments and return values in a global cache:

  • Function return types must be owned and implement Clone
  • Function arguments must either be owned and implement Hash + Eq + Clone OR the cached_key! macro must be used to convert arguments into an owned + Hash + Eq + Clone type.
  • Arguments and return values will be cloned in the process of insertion and retrieval.
  • #[cached]/cached! functions should not be used to produce side-effectual results!
  • #[cached]/cached! functions cannot live directly under impl blocks since cached! expands to a once_cell initialization and a function definition.
  • #[cached]/cached! functions cannot accept Self types as a parameter.

NOTE: Any custom cache that implements cached::Cached can be used with the cached macros in place of the built-ins.

See examples for basic usage of proc-macro & macro-rules macros and an example of implementing a custom cache-store.

cached! and cached_key! Usage & Options:

There are several options depending on how explicit you want to be. See below for a full syntax breakdown.

1.) Using the shorthand will use an unbounded cache.

#[macro_use] extern crate cached;

/// Defines a function named `fib` that uses a cache named `FIB`
cached!{
    FIB;
    fn fib(n: u64) -> u64 = {
        if n == 0 || n == 1 { return n }
        fib(n-1) + fib(n-2)
    }
}

2.) Using the full syntax requires specifying the full cache type and providing an instance of the cache to use. Note that the cache's key-type is a tuple of the function argument types. If you would like fine-grained control over the key, use the cached_key! macro instead. The following example uses a SizedCache (LRU):

#[macro_use] extern crate cached;

use std::thread::sleep;
use std::time::Duration;
use cached::SizedCache;

/// Defines a function `compute` that uses an LRU cache named `COMPUTE` which has a
/// size limit of 50 items. The `cached!` macro will implicitly combine
/// the function arguments into a tuple to be used as the cache key.
cached!{
    COMPUTE: SizedCache<(u64, u64), u64> = SizedCache::with_size(50);
    fn compute(a: u64, b: u64) -> u64 = {
        sleep(Duration::new(2, 0));
        return a * b;
    }
}

3.) The cached_key! macro functions identically, but allows you to define the cache key as an expression.

#[macro_use] extern crate cached;

use std::thread::sleep;
use std::time::Duration;
use cached::SizedCache;

/// Defines a function named `length` that uses an LRU cache named `LENGTH`.
/// The `Key = ` expression is used to explicitly define the value that
/// should be used as the cache key. Here the borrowed arguments are converted
/// to an owned string that can be stored in the global function cache.
cached_key!{
    LENGTH: SizedCache<String, usize> = SizedCache::with_size(50);
    Key = { format!("{}{}", a, b) };
    fn length(a: &str, b: &str) -> usize = {
        let size = a.len() + b.len();
        sleep(Duration::new(size as u64, 0));
        size
    }
}

4.) The cached_result! and cached_key_result! macros function similarly to cached! and cached_key! respectively, but the cached function must return a Result (or a type alias like io::Result). If the function returns Ok(val), then val is cached; errors are not. Note that only the success type needs to implement Clone, not the error type. When using cached_result! and cached_key_result!, the cache type cannot be derived and must always be explicitly specified.

#[macro_use] extern crate cached;

use cached::UnboundCache;

/// Cache the successes of a function.
/// To use `cached_key_result` add a key function as in `cached_key`.
cached_result!{
   MULT: UnboundCache<(u64, u64), u64> = UnboundCache::new(); // Type must always be specified
   fn mult(a: u64, b: u64) -> Result<u64, ()> = {
        if a == 0 || b == 0 {
            return Err(());
        } else {
            return Ok(a * b);
        }
   }
}

Syntax

The common macro syntax is:

cached_key!{
    CACHE_NAME: CacheType = CacheInstance;
    Key = KeyExpression;
    fn func_name(arg1: arg_type, arg2: arg_type) -> return_type = {
        // do stuff like normal
        return_type
    }
}

Where:

  • CACHE_NAME is the unique name used to hold a static ref to the cache
  • CacheType is the full type of the cache
  • CacheInstance is any expression that yields an instance of CacheType to be used as the cache-store, followed by ;
  • When using the cached_key! macro, the "Key" line must be specified. This line must start with the literal tokens Key = , followed by an expression that evaluates to the key, followed by ;
  • fn func_name(arg1: arg_type) -> return_type is the same form as a regular function signature, with the exception that functions with no return value must be explicitly stated (e.g. fn func_name(arg: arg_type) -> ())
  • The expression following = is the function body assigned to func_name. Note, the function body can make recursive calls to its cached-self (func_name).
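
For instance, a minimal concrete instantiation of the syntax above (names illustrative):

#[macro_use] extern crate cached;

use cached::UnboundCache;

cached_key!{
    SQUARE: UnboundCache<u64, u64> = UnboundCache::new();
    Key = { n };
    fn square(n: u64) -> u64 = {
        n * n
    }
}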

Fine grained control using cached_control!

The cached_control! macro allows you to provide expressions that get plugged into key areas of the memoized function. While the cached! and cached_result! variants are adequate for most scenarios, it can be useful to be able to customize the macro's functionality.

#[macro_use] extern crate cached;

use cached::UnboundCache;

/// The following usage plugs in expressions to make the macro behave like
/// the `cached_result!` macro.
cached_control!{
    CACHE: UnboundCache<String, String> = UnboundCache::new();

    // Use an owned copy of the argument `input` as the cache key
    Key = { input.to_owned() };

    // If a cached value exists, it will bind to `cached_val` and
    // a `Result` will be returned containing a copy of the cached
    // evaluated body. This will return before the function body
    // is executed.
    PostGet(cached_val) = { return Ok(cached_val.clone()) };

    // The result of executing the function body will be bound to
    // `body_result`. In this case, the function body returns a `Result`.
    // We match on the `Result`, returning an early `Err` if the function errored.
    // Otherwise, we pass on the function's result to be cached.
    PostExec(body_result) = {
        match body_result {
            Ok(v) => v,
            Err(e) => return Err(e),
        }
    };

    // When inserting the value into the cache we bind
    // the to-be-set-value to `set_value` and give back a copy
    // of it to be inserted into the cache
    Set(set_value) = { set_value.clone() };

    // Before returning, print the value that will be returned
    Return(return_value) = {
        println!("{}", return_value);
        Ok(return_value)
    };

    fn can_fail(input: &str) -> Result<String, String> = {
        let len = input.len();
        if len < 3 { Ok(format!("{}-{}", input, len)) }
        else { Err("too big".to_string()) }
    }
}

License: MIT

Comments
  • WASM support proposal

    Motivation

    Rust is widely used to improve performance on the web via WASM. By adding WASM support to cached, developers can use memoization to further improve performance on the web.

    Proposed changes

    • Replace std::time::Instant with instant::Instant, which is a light wrapper around std::time::Instant on native platforms
    • Provide a WASM example using the yew framework
    • Add a wasm feature to enable instant/wasm-bindgen feature

    Caveats

    • New dependency added: https://crates.io/crates/instant
    • Test suite requires some minor changes to accommodate the provided example
    • Missing tests for WASM platforms. It is possible to run wasm-bindgen tests using node.js, but I'd like to discuss the implementation details after receiving some feedback from the maintainer

    More information

    • Relates to https://github.com/jaemk/cached/pull/31
    opened by Altair-Bueno 14
  • Dynamic TimedCache

    First off, thank you for an awesome crate! I'm trying to do something like the following:

    pub fn fn_name(some_parameters, time: u64) -> Result<> {
        cached_key_result! {
            FN_NAME: TimedCache<> = TimedCache::with_lifespan_and_capacity(time, 10);
            fn inner(same_parameters_as_above) -> Result<> {
                ...code...
            }
        }
        inner(original_parameters)
    }
    

    But when I do, I get compiler errors that time isn't a const. I tried writing code to turn time into a const, but that never seemed to work out. So I'm wondering if it's possible to have a more dynamic timer for TimedCache.

    Thank you in advance.
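
    One possible workaround (a sketch, not from the thread; names are placeholders): skip the macro and build the TimedCache at runtime with once_cell, so the lifespan doesn't need to be const:

    use std::sync::Mutex;
    use cached::{Cached, TimedCache};
    use once_cell::sync::OnceCell;

    static CACHE: OnceCell<Mutex<TimedCache<String, String>>> = OnceCell::new();

    fn fn_name(key: String, time: u64) -> String {
        // The lifespan is fixed by whichever call initializes the cache first.
        let cache = CACHE.get_or_init(|| Mutex::new(TimedCache::with_lifespan(time)));
        if let Some(hit) = cache.lock().unwrap().cache_get(&key) {
            return hit.clone();
        }
        let val = format!("computed-{}", key); // stand-in for the real body
        cache.lock().unwrap().cache_set(key, val.clone());
        val
    }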

    opened by btv 9
  • Cache an async function without calling `.await`

    A question (sorry for any confusion herein) : is there a pattern to be able to cache an async fn() without calling .await if the result is already cached?

    That is, the normal pattern:

    https://github.com/jaemk/cached/blob/f5911dc3fbc03e1db9f87192eb854fac2ee6ac98/examples/async_std.rs#L10-L14

    which is then called with

    https://github.com/jaemk/cached/blob/f5911dc3fbc03e1db9f87192eb854fac2ee6ac98/examples/async_std.rs#L104

    So even just checking the in-memory cache and returning the cached result (a non-async operation) is handled as an async.

    Why is this an issue? AFAICT, when used with tokio, the call to .await is correctly handled as an i/o event and so the task is slept, in this case unnecessarily. With lots of requests, this is leading to CPU starvation. What I'd like to do is something like manually:

    let res = match cache.has(key) {
        true => cache.get(key),
        false => decorated_fn().await
    };
    // continue with rest of fn
    

    but I'm not sure how I can access the cache global (or maybe need to use a non-proc macro?).
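
    One way to get a synchronous fast path (a sketch, not from the thread): manage the store by hand so the cache check itself never awaits, and only a miss awaits the decorated call:

    use std::sync::Mutex;
    use cached::{Cached, TimedSizedCache};
    use once_cell::sync::Lazy;

    static CACHE: Lazy<Mutex<TimedSizedCache<u64, u64>>> =
        Lazy::new(|| Mutex::new(TimedSizedCache::with_size_and_lifespan(100, 60)));

    async fn decorated_fn(n: u64) -> u64 {
        n * 2 // stand-in for real async work
    }

    async fn cached_call(n: u64) -> u64 {
        // Synchronous check: no .await when the value is already cached.
        if let Some(v) = CACHE.lock().unwrap().cache_get(&n) {
            return *v;
        }
        let v = decorated_fn(n).await;
        CACHE.lock().unwrap().cache_set(n, v);
        v
    }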

    opened by absoludity 8
  • cached_result! doesn't work with async function

    The following program fails to compile.

    #[tokio::main]
    async fn main() {
        compute_thing(10).await.unwrap();
    }
    
    cached_result! {
        CACHE: cached::TimedSizedCache<i32, i32> = cached::TimedSizedCache::with_size_and_lifespan(10, 10);
        async fn compute_thing(x: i32) -> std::result::Result<i32, String> = {
            Ok(x * x)
        }
    }
    

    The error message is

    error: expected one of `where` or `{`, found `{ Ok(x * x) }`
      --> src/main.rs:22:1
       |
    22 | / cached_result! {
    23 | |     CACHE: cached::TimedSizedCache<i32, i32> = cached::TimedSizedCache::with_size_and_lifespan(10, 10);
    24 | |     async fn compute_thing(x: i32) -> std::result::Result<i32, String> = {
    25 | |         Ok(x * x)
    26 | |     }
    27 | | }
       | |_^ expected one of `where` or `{`
       |
       = note: this error originates in the macro `cached_result` (in Nightly builds, run with -Z macro-backtrace for more info)
    

    It appears to be specific to async, because removing async and .await makes the program compile. It's like the macro expands to invalid code in the async case.

    opened by tsukit 8
  • Support Redis

    I suggest supporting any network-based cache software, like Redis, Memcached, and ...

    In the k8s environment, in which you may have several small pods working at the same time, caching locally is not helpful enough and will increase memory usage which may end in OOM Kill, having a shared place to cache the data is helpful.

    If you agree, I can start trying to do it by adding a new store type, RedisCache, which I hope will be similar to TimedCache and UnboundCache.

    To support this, I think the return type of the function must implement Serialize and Deserialize.

    opened by omid 8
  • Cache without key

    What's the idiomatic way to express a cache on a fn with parameters but without a "key"?

    #[cached(time = 5, key = "&str", convert = r#"{ "" }"#)]
    async fn cached_health(db_pool: DBPool) -> Result<(), Error> {
    

    The above works but it looks a bit dirty.

    opened by macthestack 7
  • Add cached_key_result macro for caching success

    Hi, I recently came across the situation where I was trying to call a function which could fail. The function returned io::Result<Foo> which is not clonable because io::Error is not.

    However, Foo itself is, so I made a macro that takes a function that returns a Result and caches the success value. I only needed cached_key_result, but if there's interest in including it I'll happily write cached_result too.

    opened by djmcgill 7
  • Option::unwrap() on a None value in cache_set

    We're caching some results of looking up data in S3, and every few days we get a panic in cached that poisons the internal mutex.

    Our cached definition is this:

    cached_key_result! {
        QUERY: SizedCache<String, Vec<Inventory>> = SizedCache::with_size(100);
        Key = { format!("{}/{}/{}", region, bucket, recording_id) };
        fn cached_query(region: &str, bucket: &str, recording_id: &str) -> Result<Vec<Inventory>> = {
            match do_query(region, bucket, recording_id) {
                Ok(v) => if v.is_empty() {
                    Err(InventoryError::new(400, "No match"))
                } else {
                    Ok(v)
                },
                Err(e) => Err(e)
            }
        }
    }
    

    The crash is this, where cached appears at frame 9:

    Sep 18 21:29:44: thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', libcore/option.rs:345:21
    Sep 18 21:29:44: note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
    Sep 18 21:29:44: stack backtrace:
    Sep 18 21:29:44:    0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
    Sep 18 21:29:44:              at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
    Sep 18 21:29:44:    1: std::sys_common::backtrace::print
    Sep 18 21:29:44:              at libstd/sys_common/backtrace.rs:71
    Sep 18 21:29:44:              at libstd/sys_common/backtrace.rs:59
    Sep 18 21:29:44:    2: std::panicking::default_hook::{{closure}}
    Sep 18 21:29:44:              at libstd/panicking.rs:211
    Sep 18 21:29:44:    3: std::panicking::default_hook
    Sep 18 21:29:44:              at libstd/panicking.rs:227
    Sep 18 21:29:44:    4: std::panicking::rust_panic_with_hook
    Sep 18 21:29:44:              at libstd/panicking.rs:511
    Sep 18 21:29:44:    5: std::panicking::continue_panic_fmt
    Sep 18 21:29:44:              at libstd/panicking.rs:426
    Sep 18 21:29:44:    6: rust_begin_unwind
    Sep 18 21:29:44:              at libstd/panicking.rs:337
    Sep 18 21:29:44:    7: core::panicking::panic_fmt
    Sep 18 21:29:44:              at libcore/panicking.rs:92
    Sep 18 21:29:44:    8: core::panicking::panic
    Sep 18 21:29:44:              at libcore/panicking.rs:53
    Sep 18 21:29:44:    9: <cached::stores::SizedCache<K, V> as cached::Cached<K, V>>::cache_set
    Sep 18 21:29:44:   10: recoordinator::inventory::s3::query
    Sep 18 21:29:44:   11: recoordinator::inventory::s3::query_one
    Sep 18 21:29:44:   12: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &'a mut F>::call_once
    Sep 18 21:29:44:   13: <&'a mut I as core::iter::iterator::Iterator>::next
    Sep 18 21:29:44:   14: <alloc::vec::Vec<T> as alloc::vec::SpecExtend<T, I>>::from_iter
    Sep 18 21:29:44:   15: recoordinator::reel::to_desc::reel_to_desc
    Sep 18 21:29:44:   16: recoordinator::dispatch
    Sep 18 21:29:44:   17: std::panicking::try::do_call
    Sep 18 21:29:44:   18: __rust_maybe_catch_panic
    Sep 18 21:29:44:              at libpanic_unwind/lib.rs:105
    

    This is probably from one of these two lines:

    https://github.com/jaemk/cached/blob/0ed46ce9d6754da572d38617547a7b6b421007a9/src/stores.rs#L124-L125

    I guess one of these unwrap() assumptions doesn't hold true for us?

    opened by lolgesten 6
  • `compile_error!` when attempting to use `redis_store` feature

    First off, thanks @jaemk and @omid for adding Redis support!

    So I just changed my cargo.toml cached entry from:

    cached = "0.30"
    

    to

    cached = { version = "0.32", features = ["redis_store"] }
    

    However, I'm getting the following error when I run cargo build:

    tokio-comp or async-std-comp features required for aio feature
       --> /home/jqnatividad/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-0.21.5/src/aio.rs:109:13
        |
    109 |             compile_error!("tokio-comp or async-std-comp features required for aio feature")
        |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    error: tokio-comp or async-std-comp features required for aio feature
       --> /home/jqnatividad/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-0.21.5/src/aio.rs:870:9
        |
    870 |         compile_error!("tokio-comp or async-std-comp features required for aio feature");
        |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    error[E0433]: failed to resolve: use of undeclared type `ValueCodec`
       --> /home/jqnatividad/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-0.21.5/src/aio.rs:179:9
        |
    179 |         ValueCodec::default()
        |         ^^^^^^^^^^ not found in this scope
        |
    help: consider importing this struct
        |
    2   | use crate::parser::ValueCodec;
        |
    
    error[E0433]: failed to resolve: use of undeclared type `ValueCodec`
       --> /home/jqnatividad/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-0.21.5/src/aio.rs:191:9
        |
    191 |         ValueCodec::default()
        |         ^^^^^^^^^^ not found in this scope
        |
    help: consider importing this struct
        |
    2   | use crate::parser::ValueCodec;
        |
    
    error[E0433]: failed to resolve: use of undeclared type `ValueCodec`
       --> /home/jqnatividad/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-0.21.5/src/aio.rs:220:9
        |
    220 |         ValueCodec::default()
        |         ^^^^^^^^^^ not found in this scope
        |
    help: consider importing this struct
        |
    2   | use crate::parser::ValueCodec;
        |
    
    error[E0433]: failed to resolve: use of undeclared type `ValueCodec`
       --> /home/jqnatividad/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-0.21.5/src/aio.rs:229:9
        |
    229 |         ValueCodec::default()
        |         ^^^^^^^^^^ not found in this scope
        |
    help: consider importing this struct
        |
    2   | use crate::parser::ValueCodec;
        |
    
    error[E0433]: failed to resolve: use of undeclared type `ValueCodec`
       --> /home/jqnatividad/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-0.21.5/src/aio.rs:872:21
        |
    872 |         let codec = ValueCodec::default()
        |                     ^^^^^^^^^^ not found in this scope
        |
    help: consider importing this struct
        |
    2   | use crate::parser::ValueCodec;
        |
    

    looking at the redis dependency:

    https://github.com/jaemk/cached/blob/68f6619c6ae7eef42a85e1bfcae6530b6117d300/Cargo.toml#L70-L73

    It pulls in aio which triggers the compile_error!

    opened by jqnatividad 5
  • Interoperability with smartstring

    First off, thanks for this library. The payoff was immediate, as it doubled the performance of the geocoder I was using.

    As my keys are relatively short (lat/long coordinate), I was wondering if cached can work with smartstring (https://docs.rs/smartstring/0.2.9/smartstring/) to further increase performance.

    opened by jqnatividad 5
  • Separate store types

    Here I separated the store types to make it clearer and easier to spot issues.

    I had to:

    • Repeat the test methods, which is not bad :)
    • Make some cache store properties pub(super); for consistency, I made all of them pub(super)!
    • Create some type aliases in the super module to be backward compatible.

    Now, instead of having a ~2K file, we have 5 files between 250-650 lines.

    PS: This PR is so opinionated, feel free to reject it :)

    opened by omid 5
  • draft for disk cache

    This is a draft for a disk cache; it still only supports sync and has an incomplete proc macro setup.

    It would be helpful to get some feedback on the src/stores/disk.rs code so far, to see if it is going in the right direction.

    opened by tirithen 0
  • Consider only changing patch versions when making non breaking releases

    It would be great to benefit from bug fixes in cached without having to always update to the latest release.

    Please feel free to close this if you don't think it is something worth considering.

    opened by samanpa 0
  • Mio & Tokio causing wasm build to fail

    I ran into a problem with the crate not compiling, when using it in conjunction with reqwest. I managed to fix it by adding the resolver = "2" to the [package] / [workspace] toml config. Maybe mention this somewhere in the README?

    error[E0432]: unresolved import `crate::sys::IoSourceState`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/io_source.rs:12:5
       |
    12 | use crate::sys::IoSourceState;
       |     ^^^^^^^^^^^^^^^^^^^^^^^^^ no `IoSourceState` in `sys`
    
    error[E0432]: unresolved import `crate::sys::tcp`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/net/tcp/listener.rs:15:17
       |
    15 | use crate::sys::tcp::{bind, listen, new_for_addr};
       |                 ^^^ could not find `tcp` in `sys`
    
    error[E0432]: unresolved import `crate::sys::tcp`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/net/tcp/stream.rs:13:17
       |
    13 | use crate::sys::tcp::{connect, new_for_addr};
       |                 ^^^ could not find `tcp` in `sys`
    
    error[E0433]: failed to resolve: could not find `Selector` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/poll.rs:301:18
        |
    301 |             sys::Selector::new().map(|selector| Poll {
        |                  ^^^^^^^^ could not find `Selector` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:24:14
       |
    24 |         sys::event::token(&self.inner)
       |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:38:14
       |
    38 |         sys::event::is_readable(&self.inner)
       |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:43:14
       |
    43 |         sys::event::is_writable(&self.inner)
       |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:68:14
       |
    68 |         sys::event::is_error(&self.inner)
       |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
      --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:99:14
       |
    99 |         sys::event::is_read_closed(&self.inner)
       |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:129:14
        |
    129 |         sys::event::is_write_closed(&self.inner)
        |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:151:14
        |
    151 |         sys::event::is_priority(&self.inner)
        |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:173:14
        |
    173 |         sys::event::is_aio(&self.inner)
        |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:183:14
        |
    183 |         sys::event::is_lio(&self.inner)
        |              ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `event` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/event/event.rs:221:26
        |
    221 |                     sys::event::debug_details(f, self.0)
        |                          ^^^^^ could not find `event` in `sys`
    
    error[E0433]: failed to resolve: could not find `tcp` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/net/tcp/listener.rs:103:18
        |
    103 |             sys::tcp::accept(inner).map(|(stream, addr)| (TcpStream::from_std(stream), addr))
        |                  ^^^ could not find `tcp` in `sys`
    
    error[E0433]: failed to resolve: could not find `udp` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/net/udp.rs:122:14
        |
    122 |         sys::udp::bind(addr).map(UdpSocket::from_std)
        |              ^^^ could not find `udp` in `sys`
    
    error[E0433]: failed to resolve: could not find `udp` in `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/net/udp.rs:544:14
        |
    544 |         sys::udp::only_v6(&self.inner)
        |              ^^^ could not find `udp` in `sys`
    
    error[E0412]: cannot find type `Selector` in module `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/poll.rs:255:20
        |
    255 |     selector: sys::Selector,
        |                    ^^^^^^^^ not found in `sys`
    
    error[E0412]: cannot find type `Selector` in module `sys`
       --> /Users/samdenty/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.8.5/src/poll.rs:689:44
        |
    689 |     pub(crate) fn selector(&self) -> &sys::Selector {
        |                                            ^^^^^^^^ not found in `sys`
    
    opened by samdenty 0
  • Cron-like cache clearing

    Hi, is it possible to implement cron-like clearing/resetting of the cache? It would be good to clear the cache, for example, every hour.

    I don't mean the lifetime of the cached items; I mean clearing on a fixed schedule independent of when each object was cached, for example clearing the entire cache at the 15th minute of every hour.
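
    One possible approach (a sketch, not from the thread), assuming the cache static generated by the proc macro is reachable under the function's upper-cased name: clear it from a background task on a fixed schedule:

    use std::thread;
    use std::time::Duration;
    use cached::proc_macro::cached;
    use cached::Cached;

    #[cached]
    fn lookup(key: String) -> usize {
        key.len() // stand-in for real work
    }

    fn main() {
        // Wipe the generated `LOOKUP` cache every hour, regardless of
        // when individual entries were inserted.
        thread::spawn(|| loop {
            thread::sleep(Duration::from_secs(60 * 60));
            LOOKUP.lock().unwrap().cache_clear();
        });
        // ... rest of the application ...
    }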

    opened by MindMayhem 1
  • Feature request: Skip self field to allow caching methods

    I know people have asked about skipping arguments before, which can be done with the convert block. But it would be nice to allow caching methods by skipping the self field. Another solution could be tying the cache to the self instance, but I'm sure you've thought of that.

    opened by Serock3 0
  • Is it completely impossible to cache generic functions?

    I have a method that looks roughly like this:

    #[cached(time = 30, key = "String", convert = r#"{ user_id.clone() }"#)]
    async fn read_user<D>(client: Arc<D>, user_id: String) -> Result<User, DataStoragePlatformError>
    where
        D: DynamoDbClient + Send + Sync,
    {
        client.read_user(&user_id).await
    }
    

    When I compile, I get the following error, which doesn't make sense to me:

    error[E0401]: can't use generic parameters from outer function
      --> project/src/auth/aws.rs:20:35
       |
    20 | async fn read_user<D>(client: Arc<D>, user_id: String) -> Result<User, DataStoragePlatformError>
       |          ------------             ^ use of generic parameter from outer function
       |          |         |
       |          |         type parameter from outer function
       |          help: try using a local generic parameter instead: `read_user<D, D>`
    

    I'm guessing that the macro generates some code that is somehow syntactically incorrect, but I'm not sure whether there is a way to rewrite this so it would work. Stripping the generics would be hard, since the function is supposed to work with both a mock and a real implementation.

    opened by mhvelplund 1
Owner: James Kominick