A pure Rust database implementation using an append-only B-Tree file format.

Overview

nebari


nebari - noun - the surface roots that flare out from the base of a bonsai tree

Warning: This crate is early in development. The format of the file is not considered stable yet. Do not use in production.

This crate provides the Roots type, which is the transactional storage layer for BonsaiDb. It is loosely inspired by Couchstore.

Features

Nebari exposes multiple levels of functionality. The lowest level functionality is the TreeFile. A TreeFile is a key-value store that uses an append-only file format for its implementation.

Using TreeFiles and a transaction log, Roots enables ACID-compliant, multi-tree transactions.
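
As a quick illustration of the lowest level, here is a minimal TreeFile round trip. It is a sketch adapted from the benchmark snippet quoted in the comments below; the get call and the error type are assumptions, and exact signatures may differ between crate versions.

    use nebari::{
        io::fs::StdFile,
        tree::{State, TreeFile, Unversioned},
        Context,
    };

    fn treefile_round_trip() -> Result<(), nebari::Error> {
        // Open (or create) an unversioned tree backed by a standard file.
        // Passing `None` for the transaction manager keeps this non-transactional.
        let mut tree = TreeFile::<Unversioned, StdFile>::write(
            "example.nebari",
            State::default(),
            &Context::default(),
            None,
        )?;

        // Store a key/value pair and flush it to disk.
        tree.set(None, String::from("example-key"), b"example value".to_vec())?;
        tree.commit()?;

        // Read it back. The boolean flag (whether the read happens inside a
        // transaction) is an assumption about the `get` signature.
        let stored = tree.get(b"example-key", false)?;
        assert!(stored.is_some());
        Ok(())
    }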

Each tree supports:

  • Key-value storage: Keys can be any arbitrary byte sequence up to 65,535 bytes long. For efficiency, keys should be kept to smaller lengths. Values can be up to 4 gigabytes (2^32 bytes) in size.
  • Flexible query options: Fetch records one key at a time, multiple keys at once, or ranges of keys.
  • Powerful multi-key operations: Internally, all functions that alter the data in a tree use TreeFile::modify() which allows operating on one or more keys and performing various operations.
  • Pluggable low-level modifications: The Vault trait allows you to bring your own encryption, compression, or other functionality to this format. Each independently-addressable chunk of data that is written to the file passes through the vault.
  • Optional full revision history. If you don't want to lose old revisions of data, you can use a VersionedTreeRoot to store information that will allow querying old revisions. Or, if you want to avoid the extra IO, use the UnversionedTreeRoot which only stores the information needed to retrieve the latest data in the file.
  • ACID-compliance:
    • Atomicity: Every operation on a TreeFile is done atomically. Operation::CompareSwap can be used to perform atomic operations that require evaluating the currently stored value (a rough sketch follows this list).

    • Consistency: Atomic locking operations are used when publishing a new transaction state. This ensures that readers can never operate on a partially updated state.

    • Isolation: Currently, each tree can only be accessed exclusively within a transaction. This means that if two transactions request the same tree, one will execute and complete before the second is allowed access to the tree. This strategy could be modified in the future to allow for more flexibility.

    • Durability: The append-only file format is designed to only allow reading data that has been fully flushed to disk. Any writes that were interrupted will be truncated from the end of the file.

      Transaction IDs are recorded in the tree headers. When restoring from disk, the transaction IDs are verified against the transaction log. Because of the append-only format, if we encounter a transaction that wasn't recorded, we can continue scanning the file to recover the previous state. We do this until we find a successfully committed transaction.

      This process is much simpler than most database implementations due to the simple guarantees that append-only formats provide.
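
To make the Atomicity bullet above concrete, here is a hedged sketch of a compare-and-swap style update through TreeFile::modify(). The Modification, CompareSwap, and KeyOperation shapes shown here are assumptions based only on the names mentioned in this README; check the crate documentation for the exact signatures before relying on them.

    use nebari::{
        io::fs::StdFile,
        tree::{CompareSwap, KeyOperation, Modification, Operation, TreeFile, Unversioned},
        ArcBytes,
    };

    // Hypothetical sketch: atomically increment a little-endian u64 counter.
    // Field names and the closure signature are assumptions, not confirmed API.
    fn increment_counter(
        tree: &mut TreeFile<Unversioned, StdFile>,
    ) -> Result<(), nebari::Error> {
        tree.modify(Modification {
            transaction_id: None,
            keys: vec![ArcBytes::from(b"counter".to_vec())],
            operation: Operation::CompareSwap(CompareSwap::new(&mut |_key, existing| {
                // The closure sees the currently stored value, so concurrent
                // writers cannot silently overwrite each other's updates.
                let current = existing
                    .map(|bytes: ArcBytes<'static>| {
                        let mut buf = [0u8; 8];
                        buf.copy_from_slice(&bytes);
                        u64::from_le_bytes(buf)
                    })
                    .unwrap_or(0);
                KeyOperation::Set(ArcBytes::from((current + 1).to_le_bytes().to_vec()))
            })),
        })?;
        Ok(())
    }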

Why use an append-only file format?

@ecton wasn't a database engineer before starting this project, and depending on your viewpoint may still not be considered a database engineer. Implementing ACID-compliance is not something that should be attempted lightly.

Achieving ACID compliance with an append-only format is much easier, however, as long as you can guarantee two things:

  • When opening a previously existing file, you can identify where the last valid write occurred.
  • When writing the file, you do not report that a transaction has succeeded until the data is fully flushed to disk.

The B-Tree implementation in Nebari is designed to offer those exact guarantees.
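
To illustrate the first guarantee, here is a generic, hypothetical sketch of how an append-only store can locate the last valid write on startup. This is not Nebari's actual on-disk format; it only shows the idea of walking the file and stopping at the first record that is incomplete or fails its checksum.

    /// Toy record layout: an 8-byte length prefix, the payload, then a 4-byte checksum.
    /// Returns the offset of the last record that was fully and validly written.
    fn last_valid_offset(file: &[u8]) -> Option<usize> {
        let mut offset = 0;
        let mut last_valid = None;
        while offset + 12 <= file.len() {
            let len = u64::from_le_bytes(file[offset..offset + 8].try_into().ok()?) as usize;
            let end = offset + 8 + len + 4;
            if end > file.len() {
                // The record runs past the end of the file: an interrupted write.
                break;
            }
            let payload = &file[offset + 8..offset + 8 + len];
            let stored = u32::from_le_bytes(file[offset + 8 + len..end].try_into().ok()?);
            if checksum(payload) != stored {
                // Corrupt or partially flushed record; everything before it still stands.
                break;
            }
            last_valid = Some(offset);
            offset = end;
        }
        last_valid
    }

    fn checksum(data: &[u8]) -> u32 {
        // Deliberately simplistic; a real format would use a proper CRC.
        data.iter().fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u32))
    }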

The major downside of append-only formats is that deleted data isn't cleaned up until a maintenance process occurs: compaction. This process rewrites the file's contents, skipping over entries that are no longer alive. Compaction can happen without blocking the file from being operated on, but it does introduce IO overhead while it runs.

Nebari will provide the APIs necessary to perform compaction, but may delegate scheduling and automation to users of the library and BonsaiDb.

Open-source Licenses

This project, like all projects from Khonsu Labs, is open-source. This repository is available under the MIT License or the Apache License 2.0.

Comments
  • Extremely slow insertion and huge file size

    Hi! I'm trying to use nebari storage in a very simple mode to keep metrics data, one entry per minute per key. My simple demo:

    pub fn nebari_test() -> Result<(), anyhow::Error> {
        let path = "db.nebari";
        // Start from a fresh file; ignore the error if it doesn't exist yet.
        let _ = std::fs::remove_file(path);
        let state = nebari::tree::State::default();
        let context = nebari::Context::default();
        let mut tree = nebari::tree::TreeFile::<nebari::tree::Unversioned, nebari::io::fs::StdFile>::write(path, state, &context, None)?;

        let value = [0u8; 0].to_vec();

        let mut keys_count = 0;
        let mut keys_size = 0;
        let mut values_size = 0;

        // Insert one empty value per metric per minute of a day: 24 * 60 * 50 = 72,000 keys.
        let now = std::time::Instant::now();
        for hour in 0..24 {
            for minute in 0..60 {
                for metric in 0..50 {
                    let metric = format!("metric_{metric}");
                    let key = format!("{metric}:h{hour}:m{minute}");
                    keys_size += key.as_bytes().len();
                    values_size += value.len();
                    tree.set(None, key, value.clone())?;
                    keys_count += 1;
                }
            }
        }
        tree.commit()?;
        println!("elapsed:        {}s", now.elapsed().as_secs());
        println!("keys_count:     {}", keys_count);
        println!("keys_size:      {}", keys_size);
        use std::os::unix::fs::MetadataExt;
        let meta = std::path::Path::new(path).metadata()?;
        println!("file size:      {}", meta.size());
        Ok(())
    }
    

    On my MacBook arm64 M1 Pro, with nebari = "0.3.1", I have this result:

    elapsed:        372s
    keys_count:     72_000
    keys_size:      1_167_600
    file size:      4_961_529_211
    

    After compacting, the file size is ~2.8 MB, but compact consumes mut self, so I can't compact the tree on every insert. I'm looking for an embedded Rust database for IoT devices with SD card storage and 64 MB of RAM. Nebari looks like a good solution, but why is it so slow and the db file so huge? Could this be incorrect behavior?

    For example, the same test on other databases:

    db          time       size
    nebari      372s       4_961_529_211 bytes (2_886_779 bytes after compacting)
    sled        17s        ~4_718_592 bytes
    rocksdb     0.577s     ~2_752_134 bytes
    jammdb      1.511s     8_912_896 bytes
    siamesedb   4.311s     ~140 MB
    
    opened by chertov 4
  • Add customizable index types

    We should add the ability to add extra stats to the BTree nodes. This would allow BonsaiDb and other consumers to latch onto the internal map/reduce functionality. For BonsaiDb, it would allow for multi-level view reduction that follows the tree, rather than the single-level cache that is currently implemented.

    For this to work, Root will need to have ReducedIndex, and we will need to expose the serialization APIs.

    enhancement 
    opened by ecton 1
  • Handle open-files better

    Currently, the StdFileManager will keep files open forever, except after a compaction process happens. This works fine for small setups, but in BonsaiDb, we want to support thousands of databases without any issues.

    We need to:

    • Investigate raising the open file limit automatically
    • Switch to an LRU cache for the open files in the StdFileManager, both readers and writers. Try to come up with a way to ensure that if we end up with many readers for a single file that we can expire individual readers in the cache.
    enhancement 
    opened by ecton 1
  • Implement tree compaction

    Add the ability to compact a database in-place:

    • Begin copying all live data from the existing file into a new database.
    • Once caught up, use file renames to atomically swap the new file in place.
    • Ensure that all existing readers/writers are closed so that new access will open the new file.

    Once done, update the benchmarks to have all get-like benchmarks show off fragmented and compacted performance.

    enhancement 
    opened by ecton 1
  • Refactor benchmarks

    The violin charts start getting a bit useless when there are too many results to present, so I think we should split the groups from simply "gets" and "inserts" to:

    • get-[random/sequential]/[engine]/[dataset size]: single record lookup by primary key. The benchmark argument will be the dataset size.
    • get_multiple-[dataset size]-[random/sequential]/[engine]/[query size]: multi-record lookup by primary key. The benchmark argument will be the number of records requested in each iteration.
    • insert-[random/sequential]/[engine]/[transaction size]: Insert transactions with transaction size records over and over. Each iteration adds to the existing database, but the database should be cleared at the start of the benchmark group.

    Additionally, we should benchmark scan.

    enhancement 
    opened by ecton 1
  • Add a fuzzing test

    Refs #50

    This implements our first fuzzer. It replicates the process that the bulk_compare_swap created, but in such a way that the fuzzer provides the batches. After a couple of hours across 8 cores, the fuzzer found a new bug. After fixing that bug (will be pushed in the next commit to main), the fuzzer then ran for another 6 hours with no additional crashes. I'll continue running it for the next day or two.

    I'm not closing #50 because another fuzzer should be written that executes a wide variety of operations in a sequential manner. This test focuses on batching because that's where I discovered a bug this weekend. It would be nice to have some fuzzing validation against the remaining operations -- and it can be less restrictive due to not needing to be batched.

    This commit also temporarily changes a few debug_asserts into asserts. This will be addressed in a follow up commit.

    opened by ecton 0
  • Add `self` support to Reducer

    After optimizing count() in BonsaiDb after adding reduce() here, I looked at storing the view reductions in Nebari's indexes (khonsulabs/bonsaidb#76). I realized that for BonsaiDb to be able to use a View's reduce function, the Reducer needs a self that can store the view.

    The problem is, where does the reducer get instantiated and stored? Or is it better to refactor the reducer to be the Root itself?

    enhancement 
    opened by ecton 0
  • Add an easy way to switch between MemoryFile/StdFile

    Refs khonsulabs/bonsaidb#169

    This introduces the AnyFile and AnyFileManager types which allow selecting whether to use MemoryFiles or StdFiles at runtime.

    enhancement 
    opened by ecton 0
  • Implement a reduce API

    The last major feature that BonsaiDb needs in order to consume the new features implemented in #19 and #21 is the ability to compute a reduced value over a partial scan of a tree. Originally, when implementing the underlying features, I envisioned "reduce" being a BonsaiDb views feature, not something that Nebari itself would offer. However, upon more thought, it definitely would be a great feature to have in this crate.

    The API should look something like this:

    impl TreeFile {
        fn reduce<R: RangeBounds<Buffer>>(&self, range: R) -> Root::ReducedIndex {}
    }
    

    That signature is pseudo-code, and probably requires additional parameters.

    enhancement 
    opened by ecton 0
  • Add interior node scanning callback

    Scanning a tree should offer a callback for each interior node that presents the index and an option of whether to scan the node or not.

    This will enable BonsaiDb to query the "reduced" index values, and skip diving into the children nodes when a node is fully included.

    enhancement 
    opened by ecton 0
  • Add per-tree vault access to Roots

    BonsaiDb will want to have some files that are unencrypted purely for speed. The Vault for Roots will likely need to be None, and while accessing trees, the TreeRoot will need a way to have its own Vault to override whatever was set on the Roots level.

    enhancement 
    opened by ecton 0
  • Replace Append-only format with Sediment

    • [ ] transaction log caching needs to be replaced with LRUs
    • [ ] When removing or overwriting an existing value, the GrainId of the value needs to be freed.
    • [ ] Checkpointing needs to be updated to ensure readers will never have their "read" state freed. Currently, Roots doesn't checkpoint at all, and TreeFile auto-checkpoints when not transactional.
    opened by ecton 0
  • Fuzz Testing

    Nebari should expand its testing suite to include fuzz testing.

    Personally I don't have any experience with it, but some pointers here:

    help wanted 
    opened by daxpedda 0
  • Profile MemoryFile implementation

    From Discord, @rrichardson discovered that the MemoryFile implementation is dramatically slower than the real-file-based implementation, at least when consumed from BonsaiDb -- on the order of 10 minutes vs. 6 seconds for a particular importing operation.

    bug 
    opened by ecton 0
  • Consider allowing TreeFile to use external blob storage

    Currently, TreeFiles store blobs/chunks in the same file that nodes are written to. When compacting a database, all of the blobs that are alive must be transferred to the new file.

    Over time, this is a lot of wasted IOPS if your application is never deleting data. In this day and age, a common way to operate is to "store everything" and only delete once it becomes a problem.

    The main idea of this issue is simple:

    • Change all of the tree file operations to use a new trait ChunkStorage to write non-node chunks. This may require adding a new parameter to each operation.
    • Allow specifying a ChunkStorage implementation when creating a TreeFile/Roots instance.
    • If no ChunkStorage is specified, chunks should be written in-line like they are today.
    • The ChunkStorage implementation can use 63 bits of information to note where the chunk is stored. The 64th bit will be used by Nebari to note that the chunk is stored externally.

    The hard part will be compaction. Nebari doesn't keep track of chunks. The way compaction works currently is that data is copied when it is referenced; otherwise it is skipped. To achieve the goal of "not rewriting everything", the ChunkStorage implementation needs to receive enough information to be able to determine on its own how to compact itself, or opt not to. At this time, I'm not sure of a good way to solve this.

    More intelligent compaction can be achieved by using TreeFile to implement ChunkStorage. While this causes extra overhead, the TreeFile could return unique "chunk IDs" that are stable, but the actual location on disk can be moved around. This is where the idea of "tiered" storage comes in, as this TreeFile could do many things including:

    • Embed statistics about read frequency of each key, allowing compaction to group frequently used data closer together, or moving infrequently accessed keys to slower storage.
    • Subdivide storage into segments that can be defragmented independently.
    enhancement 
    opened by ecton 0
  • Implement Transaction Log v2

    • #35
    • #40
    • Consider allowing larger transaction payloads
    • Add atomic upgrade
      • TransactionLog should expose a new error when the log format is the old format.
      • TransactionLog::repair should be added to rewrite transaction log to new file using new format, atomic swap to overwrite.
      • Add unit test upgrading an existing log file to the new format. Ensure testing one that contains out-of-order log entries.
    enhancement 
    opened by ecton 0
Releases (v0.5.4)
  • v0.5.4(May 29, 2022)

    Fixed

    • log::State::current_transaction_id() now behaves as documented. Previously, it was returning the last transaction ID that the log file had allocated, but the transaction ID returned may not have been committed. Now the ID returned is guaranteed to be the last ID written to the log.
  • v0.5.3(May 3, 2022)

    Fixed

    • File operations are now fully persisted to disk to the best ability provided by each operating system. @justinj discovered that no fsync() operations were happening, and reported the finding. Nebari's TreeFile was using File::flush() instead of File::sync_data()/sync_all(). This means that it would be possible for an OS-level buffer to not be flushed to disk before Nebari reported a successful write.

      Interestingly, this change has no noticeable impact on performance on Linux. However, on Mac OS, File::sync_data() performs a fcntl with F_FULLFSYNC, which has a significant impact on performance. This is the correct behavior, however, as without this level of guarantee, sudden power loss could result in data loss.

      Many people argue that using F_BARRIERFSYNC is sufficient for most use cases, but Apple's own documentation says this about F_FULLFSYNC:

      Only use F_FULLFSYNC when your app requires a strong expectation of data persistence. Note that F_FULLFSYNC represents a best-effort guarantee that iOS writes data to the disk, but data can still be lost in the case of sudden power loss.

      For now, the stance of Nebari's authors is that F_FULLFSYNC is the proper way to implement true ACID-compliance.

  • v0.5.2(May 2, 2022)

    Fixed

    • Another edge case similar to the one found in v0.5.1 was discovered through newly implemented fuzzer-based testing. When a node is fully absorbed into the bottom of the next node, in some cases the modification iterator would not back up to reconsider the node. When inserting a new key in this situation, if the new key was greater than the lowest key in the next node, the tree would get out of order.

      The exact circumstances of this bug are similarly as rare as described in v0.5.1's entry.

    Added

    • Feature paranoid enables extra sanity checks. This feature flag was added for fuzzing purposes. It enables, in release builds, extra sanity checks that are always present in debug builds. These sanity checks are useful in catching bugs, but when they trip they indicate a state that would corrupt the database if it were persisted to disk.

      These checks slow down modifications to the database significantly.

  • v0.5.1(Apr 30, 2022)

    Fixed

    • modify() operations on larger trees (> 50k entries) that performed multiple modification operations could trigger a debug_assert in debug builds, or worse, yield incorrect databases in release builds.

      The offending situations occur with edge cases surrounding "absorbing" nodes to rebalance trees as entries are deleted. This particular edge case only arose when the absorb phase moved entries in both directions and performed subsequent operations before the next save to disk occurred.

      This bug should only have been able to be experienced if you were using large modify() operations that did many deletions as well as insertions, and even then, only in certain circumstances.

  • v0.5.0(Mar 12, 2022)

    Breaking Changes

    • KeyEvaluation has been renamed to ScanEvaluation.

    • All scan() functions have been updated: the node_evaluator callback now returns a ScanEvaluation instead of a bool. To preserve existing behavior, return ScanEvaluation::ReadData instead of true and ScanEvaluation::Stop instead of false.

      The new functionality unlocked with this change is that scan operations can now be directed as to whether to skip navigating into an interior node. The new reduce() function uses this ability to skip scanning nodes when an already reduced value is available on a node.

    Added

    • TreeFile::reduce(), Tree::reduce(), TransactionTree::reduce() have been added as a way to return aggregated information stored within the nodes. A practical use case is the ability to retrieve the number of alive/deleted keys over a given range, but this functionality extends to embedded indexes through the existing Reducer trait.
  • v0.4.0(Mar 1, 2022)

    Breaking Changes

    • get_multiple has been changed to accept an Iterator over borrowed byte slices.

    • ExecutingTransaction::tree now returns a LockedTransactionTree, which holds a shared reference to the transaction now. Previously tree() required an exclusive reference to the transaction, preventing consumers of Nebari from using multiple threads to process more complicated transactions.

      This API is paired with a new addition: ExecutingTransaction::unlocked_tree. This API returns an UnlockedTransactionTree, which can be sent across thread boundaries safely. It offers a lock() function to return a LockedTransactionTree when the thread is ready to operate on the tree.

    • TransactionManager::push has been made private. This is a result of the previous breaking change. TransactionManager::new_transaction() is a new function that returns a ManagedTransaction. ManagedTransaction::commit() is the new way to commit a transaction in a transaction manager.

    Fixed

    • TransactionManager now enforces that transaction log entries are written sequentially. The ACID-compliance of Nebari was never broken when non-sequential log entries were written, but scanning the log file could fail to retrieve items, as the scanning algorithm expects the file to be ordered sequentially.

    Added

    • ThreadPool::new(usize) allows creating a thread pool with a maximum number of threads set. ThreadPool::default() continues to use num_cpus::get to configure this value automatically.
  • v0.3.2(Feb 23, 2022)

    Fixed

    • Fixed potential infinite loop when scanning for a transaction ID that does not exist.
    • Reading associated transaction log data now works when the data is larger than the page size. Previously, the data returned included the extra bytes that the transaction log inserts at page boundaries.
  • v0.3.1(Feb 14, 2022)

    Changed

    • BorrowedRange now exposes its fields as public. Without this, there was no way to implement BorrowByteRange outside of this crate.
    • This crate now explicitly states its minimum supported Rust version (MSRV). The MSRV did not change as part of this update. It previously was not documented.
  • v0.3.0(Feb 9, 2022)

    Breaking Changes

    • ManagedFile has had its metadata functions moved to a new trait File which ManagedFile must be an implementor of. This allows dyn File to be used internally. As a result, PagedWriter no longer takes a file type generic parameter.
    • ManagedFile has had its functions open_for_read and open_for_append moved to a new trait, ManagedFileOpener.
    • FileManager::replace_with now takes the replacement file itself instead of the file's Path.
    • compare_and_swap has had the old parameter loosened to &[u8], avoiding an extra allocation.
    • TreeFile::push() has been renamed TreeFile::set() and now accepts any type that can convert to ArcBytes<'static>.

    Added

    • AnyFileManager has been added to make it easy to select between memory or standard files at runtime.
    • Tree::first[_key](), TransactionTree::first[_key](), and TreeFile::first[_key]() have been added, pairing the functionality provided by last() and last_key().
  • v0.2.2(Feb 1, 2022)

    Fixed

    • Fixed a hypothetical locking deadlock if transactions for trees were passed into State::new_transaction or Roots::new_transaction in an inconsistent order.
  • v0.2.1(Jan 27, 2022)

    Fixed

    • Removing a key in a versioned tree would cause subsequent scan() operations to fail if the key evaluator requested reading data from a key that has no current data. A safeguard has been put in place to ensure that even if KeyEvaluation::ReadData is returned on an index that contains no position, the operation is skipped rather than attempting to read data from the start of the file.

      Updating the crate should restore access to any "broken" files.

  • v0.2.0(Jan 26, 2022)

    Breaking Changes

    • tree::State::read() now returns an Arc containing the state, rather than a read guard. This change has no noticeable impact on microbenchmarks, but yields fairer write performance under heavy-read conditions -- something the current microbenchmarks don't test but the in-development Commerce Benchmark for BonsaiDb unveiled.
    • Buffer has been renamed to ArcBytes. This type has been extracted into its own crate, allowing it to be used in bonsaidb::core. The new crate is available here.
    • Root::scan, Tree::scan, Tree::get_range, TransactionTree::scan, and TransactionTree::get_range now take types that implement RangeBounds<&[u8]>. BorrowByteRange is a trait that can be used to help borrow ranges of owned data.

    Added

    • nebari::tree::U64Range has been exposed. This type makes it easier to work with ranges of u64s.
  • v0.1.1(Jan 6, 2022)

    Added

    • Tree::replace has been added, which calls through to TransactionTree::replace.
    • Tree::modify and TransactionTree::modify have been added, which execute a lower-level modification on the underlying tree.
  • v0.1.0(Jan 4, 2022)

    This release signifies that we believe Nebari is stable enough that other projects could use it. While it's had a fair amount of internal testing while developing BonsaiDb, it should still be considered alpha and used with caution.

    Any breaking file format changes from this point forward will result in major version number increments.
