A programmable document database, inspired by CouchDB and written in Rust


PliantDB

PliantDB is considered experimental and unsupported.

PliantDB aims to be an ACID-compliant document database written in Rust and inspired by CouchDB. While it takes inspiration from CouchDB, this project does not aim to be compatible with existing CouchDB servers; it implements its own replication, clustering, and sharding strategies.

Project Goals

The high-level goals for this project are:

  • ☑️ Be able to build a document-based database's schema using Rust types.
  • ☑️ Run within your Rust binary, simplifying basic deployments.
  • ☑️ Run as a local-only file-based database with no networking involved.
  • ☑️ Run as a networked server using QUIC with TLS enabled by default.
  • Easily set up read-replicas between multiple servers.
  • Easily run a highly-available quorum-based cluster across at least 3 servers.
  • ☑️ Expose a Publish/Subscribe eventing system.
  • Expose a job queue and scheduling system -- à la Sidekiq or SQS.

⚠️ Status of this project

You should not attempt to use this software in anything except experiments. This project is under active development, but at the time of writing, it is too early to be used.

If you're interested in chatting about this project or potentially contributing, come chat with us on Discord.

Example

Check out ./pliantdb/examples for examples. To get an idea of how it works, here is a simple schema:

#[derive(Debug, Serialize, Deserialize)]
struct Shape {
    pub sides: u32,
}
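
impl Shape {
    // Convenience constructor used by the insertion example below.
    pub fn new(sides: u32) -> Self {
        Self { sides }
    }
}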

impl Collection for Shape {
    fn collection_name() -> Result<CollectionName, InvalidNameError> {
        CollectionName::new("khonsulabs", "shapes")
    }

    fn define_views(schema: &mut Schematic) -> Result<(), Error> {
        schema.define_view(ShapesByNumberOfSides)
    }
}

#[derive(Debug)]
struct ShapesByNumberOfSides;

impl View for ShapesByNumberOfSides {
    type Collection = Shape;

    type Key = u32;

    type Value = usize;

    fn version(&self) -> u64 {
        1
    }

    fn name(&self) -> Result<Name, InvalidNameError> {
        Name::new("by-number-of-sides")
    }

    fn map(&self, document: &Document<'_>) -> MapResult<Self::Key, Self::Value> {
        let shape = document.contents::<Shape>()?;
        Ok(Some(document.emit_key_and_value(shape.sides, 1)))
    }

    fn reduce(
        &self,
        mappings: &[MappedValue<Self::Key, Self::Value>],
        _rereduce: bool,
    ) -> Result<Self::Value, view::Error> {
        Ok(mappings.iter().map(|m| m.value).sum())
    }
}

After you have your collection(s) defined, you can open up a database and insert documents:

    let db =
        Database::<Shape>::open_local("view-examples.pliantdb", &Configuration::default()).await?;

    // Insert a new document into the Shape collection.
    db.collection::<Shape>().push(&Shape::new(3)).await?;

And query data using the Map-Reduce-powered view:

let triangles = db
    .view::<ShapesByNumberOfSides>()
    .with_key(3)
    .query()
    .await?;
println!("Number of triangles: {}", triangles.len());

Why write another database?

  • Deploying highly-available databases is hard (and often expensive). It doesn't need to be.
  • We are passionate Rustaceans and are striving for an ideal of supporting a 100% Rust-based deployment ecosystem for newly written software.
  • Specifically for the founding author @ecton, the idea for this design dates back to thoughts of fun side projects while running his last business, which was built atop CouchDB. Working on this project fulfills a long-time desire of his.

Feature Flags

No feature flags are enabled by default in the pliantdb crate. This is because in most Rust executables, you will only need a subset of the functionality. If you'd prefer to enable everything, you can use the full feature:

[dependencies]
pliantdb = { version = "*", default-features = false, features = ["full"] }
  • full: Enables local-full, server-full, and client-full.
  • cli: Enables the pliantdb executable.

Local databases only

[dependencies]
pliantdb = { version = "*", default-features = false, features = ["local-full"] }
  • local-full: Enables local, local-cli, local-keyvalue, and local-pubsub.
  • local: Enables the local module, which re-exports the crate pliantdb-local.
  • local-cli: Enables the StructOpt structures for embedding database management commands into your own command-line interface.
  • local-pubsub: Enables PubSub for pliantdb-local.
  • local-keyvalue: Enables the key-value store for pliantdb-local.
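
For example, to embed a local database with PubSub but without the key-value store or CLI tooling, you could enable just the pieces you need (a sketch composed from the feature names above):

[dependencies]
pliantdb = { version = "*", default-features = false, features = ["local", "local-pubsub"] }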

PliantDB server

[dependencies]
pliantdb = { version = "*", default-features = false, features = ["server-full"] }
  • server-full: Enables server, server-websockets, server-keyvalue, and server-pubsub.
  • server: Enables the server module, which re-exports the crate pliantdb-server.
  • server-websockets: Enables WebSocket support for pliantdb-server.
  • server-pubsub: Enables PubSub for pliantdb-server.
  • server-keyvalue: Enables the key-value store for pliantdb-server.

Client for accessing a PliantDB server

[dependencies]
pliantdb = { version = "*", default-features = false, features = ["client-full"] }
  • client-full: Enables client, client-trusted-dns, client-websockets, client-keyvalue, and client-pubsub.
  • client: Enables the client module, which re-exports the crate pliantdb-client.
  • client-trusted-dns: Enables using trust-dns for DNS resolution. If not enabled, all DNS resolution is done with the OS's default name resolver.
  • client-websockets: Enables WebSocket support for pliantdb-client.
  • client-pubsub: Enables PubSub for pliantdb-client.
  • client-keyvalue: Enables the key-value store for pliantdb-client.

Backups

Exporting and restoring databases with direct access

If you have a local PliantDB database, you can use the local-backup command to save and load backups:

pliantdb local-backup <database-path> save
pliantdb local-backup <destination-database-path> load <backup-path>

The format of this export should be easy to work with, whether you're transitioning from PliantDB to another solution or doing complicated disaster recovery work. It is described here.

Developing PliantDB

Pre-commit hook

Our CI processes require that some commands succeed without warnings or errors. To ensure that code you submit passes the basic checks, install the included pre-commit hook:

./git-pre-commit-hook.sh install

Once done, tools including cargo fmt, cargo doc, and cargo test will all be checked before git commit executes.

Open-source Licenses

This project, like all projects from Khonsu Labs, is open-source. This repository is available under the MIT License or the Apache License 2.0.

Comments
  • Support deriving Collection

    I also added trybuild to test the error messages of derive macro.

    The macro does not check for name validity as the restrictions will be removed https://github.com/khonsulabs/bonsaidb/issues/143#issuecomment-1019278177.

    closes #138

    opened by ModProg 13
  • Add deriveable support for structs as View keys

    This request is a refinement of the idea from #65. View keys need to be comparable on a byte-by-byte basis to resolve ordering, and the Key trait currently must be manually implemented for any non-integer type.

    This should be done as a custom derive. The composite format should be lightweight and focus on the problems that multi-field comparisons might unveil -- e.g., a String key is variable length, and a shorter String should sort lower than a longer string, regardless of the bytes that follow in the next field.

    The other thing a custom derive can do is offer the ability to support floating points. As far as I can tell, there's no reliable way to encode a floating-point number such that it can be memcmp'ed. An attribute could be used to allow multiple strategies for utilizing a floating-point key -- e.g., optionally multiply by 10^x and then convert to an i64/u64.
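
    A minimal sketch of the kind of encoding such a derive could generate -- the helper below is hypothetical, not part of the current Key trait. Big-endian bytes preserve integer order under byte-wise comparison, and a scaled fixed-point integer can stand in for a float:

    /// Hypothetical composite-key encoding: the output bytes compare with
    /// memcmp in the same order as the original (u32, f64) fields.
    fn encode_key(id: u32, price: f64) -> [u8; 12] {
        let mut bytes = [0; 12];
        // Big-endian integers sort correctly under byte-wise comparison.
        bytes[..4].copy_from_slice(&id.to_be_bytes());
        // One float strategy: multiply by 10^2 and store as a fixed-point
        // integer (only valid for non-negative values in this sketch).
        let cents = (price * 100.0) as u64;
        bytes[4..].copy_from_slice(&cents.to_be_bytes());
        bytes
    }

    fn main() {
        assert!(encode_key(1, 2.50) < encode_key(1, 10.00));
        assert!(encode_key(1, 99.99) < encode_key(2, 0.01));
    }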

    The macro needs to be able to have the location of bonsaidb_core specified as an optional attribute. By default, it should use ::bonsaidb::core as the path to bonsaidb_core, but a user should be able to override it based on their usage. E.g., #[view(core = bonsaidb_server::core)].

    enhancement help wanted collections 
    opened by ecton 10
  • Add `encryption_key` to Collection derive

    Didn't think of it in #138, but the encryption_key function should also be derivable:

    #[derive(Collection)]
    #[collection(..., encryption_key = Some(KeyId::Master))]
    struct MyCollection {...}
    

    should implement the encryption key method in the generated Collection implementation:

    impl Collection for MyCollection {
        // ...
        fn encryption_key() -> Option<KeyId> {
            Some(KeyId::Master)
        }
    }
    

    It may also be nice to support allowing the encryption key to be None when it is disabled. Perhaps a second parameter allow_unencrypted_storage (feel free to suggest alternative names) which changes the implementation to:

    impl Collection for MyCollection {
        // ...
        fn encryption_key() -> Option<KeyId> {
            if ENCRYPTION_ENABLED {
                Some(KeyId::Master)
            } else {
                None
            }
        }
    }
    
    enhancement 
    opened by ecton 9
  • View generates inconsistent sort order

    Just trying out bonsaidb for the first time. I've been reading about it and I can't wait to use it. Thanks so much for this amazing project!

    However, my first attempt at using Views leads to some inconsistent behaviour.

    I've copied the BlogPost example into my own crate to test things out. But the thing is this query https://github.com/khonsulabs/bonsaidb/blob/main/book/book-examples/tests/view-example-string.rs#L93

    sometimes logs

    Retrieved post #7$1 "New version of BonsaiDb released"
    Retrieved post #7$2 "New Rust version released"
    Retrieved post #1 "New version of BonsaiDb released"
    Retrieved post #2 "New Rust version released"
    

    and others this:

    Retrieved post #7$2 "New Rust version released"
    Retrieved post #7$1 "New version of BonsaiDb released"
    Retrieved post #2 "New Rust version released"
    Retrieved post #1 "New version of BonsaiDb released"
    

    At first I thought it may be because the example doesn't use any specific sort, but then the docs say that ascending is the default. Indeed, I can see the query is built with forward scanning.

    Adding ascending or descending to the view has the same erratic results.

    I don't really have a way to reproduce this, except by running the test many times.

    documentation collections 
    opened by cortopy 8
  • Add derive macro for View

    • ~~Blocked by #99~~

    Once we split View implementations from their metadata, View could be derived in a similar fashion as #138.

    The macro needs to be able to have the location of bonsaidb_core specified as an optional attribute. By default, it should use ::bonsaidb::core as the path to bonsaidb_core, but a user should be able to override it based on their usage. E.g., #[view(core = bonsaidb_server::core)].

    enhancement collections 
    opened by ecton 8
  • Add derive macro for Collection

    Collection currently has a lot of boilerplate code that could be reduced or eliminated by a derive macro. We should consider adding macros to help implement collections:

    • Collection Authority and Name can be set as parameters on the struct
    • DefaultSerialization can be auto-derived, or overridden with a parameter.
    • Views can be specified as a list of type names (e.g. [ByName, ...]).
    • We should consider #69 and determine if there is any way to support deriving Collection while still offering the lifecycle event implementations easily.

    Originally posted by @ModProg in https://github.com/khonsulabs/bonsaidb/issues/106#issuecomment-1017519606

    Maybe other things could also be derived like DefaultSerialization.

    Maybe also Collection, but that would only be sensible for ones without views etc. -- at least for my use, that would be really comfortable :D.

    #[derive(Collection)]
    #[collection_name = ["hoppi", "notification_subscription"]]
    // or
    #[collection_name = "hoppi.notification_subscription"]
    

    If you want I would look into implementing the macro.

    I have this a lot:

    impl Collection for SubscriptionInfoTable {
        fn collection_name() -> Result<CollectionName, InvalidNameError> {
            CollectionName::new("hoppi", "notification_subscription")
        }
    
        fn define_views(_schema: &mut Schematic) -> Result<(), bonsaidb::core::Error> {
            Ok(())
        }
    }
    impl DefaultSerialization for SubscriptionInfoTable {}
    
    enhancement good first issue collections 
    opened by ecton 8
  • View is never computed

    Hi! I found another thing, this time a bug:

    I am starting an example with an empty database and regularly poll a view on my collection. Initially it is empty of course, so no need to compute the view.

    However, when I add just a single entry, it stochastically sometimes updates the view, sometimes it doesn't (it doesn't call the map function). This is not a problem when adding many entries, because it updates often enough then. But when adding just a single entry, this makes the view be empty forever, even if there should be an entry (again, map is never called, despite an entry being added to the collection).

    I hope this was understandable, please tell me whether you need any more information.

    Thank you again! :)

    bug collections storage 
    opened by FlixCoder 7
  • Switch to available_parallelism for auto-configuration

    Via @daxpedda in Discord: a new API has been stabilized in Rust 1.59 that allows querying the "available parallelism":

    https://doc.rust-lang.org/stable/std/thread/fn.available_parallelism.html

    We could replace the sysinfo dependency with this new API for querying the CPU count. We have to update the MSRV to use this.
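
    A minimal sketch of what the replacement could look like, using only the std API:

    use std::num::NonZeroUsize;
    use std::thread::available_parallelism;

    fn worker_count() -> usize {
        // Fall back to a single worker if the parallelism cannot be queried.
        available_parallelism().map(NonZeroUsize::get).unwrap_or(1)
    }

    fn main() {
        println!("configuring {} workers", worker_count());
    }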

    enhancement 
    opened by ecton 6
  • File storage format questions

    What is the format when the db is stored as a file? (I couldn't find documentation on it.) Is the format fixed, or would there be a possibility for a human-readable format? (JSON, TOML, text, anything.)

    opened by happysalada 3
  • Add Derive for Api

    The Api trait should be derivable:

    struct MyType;
    
    impl Api for MyType {
      type Response = OtherType;
      type Error = bonsaidb::core::api::Infallible;
    
      fn name() -> ApiName {
        ApiName::new("authority", "name")
      }
    }
    

    Should be able to be derived as:

    #[derive(Api)]
    #[api(authority = "authority", name = "name", response = OtherType, error = Infallible)]
    struct MyType;
    

    Response is required. Error should be optional. If not specified, it should use bonsaidb_core::api::Infallible. Authority can be optional the same way the other derives work.

    @ModProg

    enhancement 
    opened by ecton 3
  • Implement Basic App Platform

    The goal of this issue is to create a basic app platform that allows defining restful API endpoints, simple HTTP serving, a custom over-the-wire (WebSocket/PliantDb) API. A developer could use this platform to create an app that uses PliantDb to manage users and permissions, and optionally an HTTP layer with WebSocket support serving requests. The HTTP layer's main purposes are writing RESTful APIs and serving a single-page app that is powered by WASM + PliantDb over WebSockets.

    • [x] Implement custom api support (#54)
    • [x] Add Users and Permissions management
      • [x] Ability to define permission groups
      • [x] Ability to define roles (one or more groups)
      • [x] Ability to create users (username + password minimum)
      • [x] Ability to assign users to roles
    • [x] Add connected-client tracking, a-la basws:
      • [x] Upon connecting, initiate authentication
      • [x] If no auth, proceed with default permissions
        • [x] If not permitted to connect, disconnect.
      • [x] If authed, load user profile and effective permissions (needs ~khonsulabs/actionable#2~).
      • [x] Call Backend method notifying of either anonymous or logged-in connection.
    • [x] Implement HTTP layer
      • [ ] Simple routing -> handler mapping. Maybe doable with actionable + another trait?
      • [ ] Handlers convert from HttpRequest to HttpResponse
      • [x] Integrate WebSockets into routing
      • [x] Add TLS support
      • [x] Add static-file handler(s)
      • [ ] Add Single-Page-App server
    enhancement blocked 
    opened by ecton 3
  • Allow configuring cache sizes

    Currently, the ChunkCache used by Nebari is hardcoded. We should add parameters to the StorageConfiguration to allow customizing the cache parameters.

    enhancement storage 
    opened by ecton 0
  • Project Status

    Hey, just stumbled across BonsaiDB and it looks really neat! The last commit was in August, so I just wanted to check before I get too far into it: is this project still being maintained?

    opened by D1plo1d 8
  • Add Bulk Key/KeyValue iteration/retrieval

    Currently there is no way to list all of the keys in the Key-Value store. The internal backup functionality uses internal APIs to perform this operation.

    At a minimum we should have:

    • A way to retrieve all keys
    • A way to retrieve all keys and values
    • A way to retrieve ranges of keys
    • A way to retrieve ranges of keys and values

    We should consider how retrieval should work with namespaced keys -- maybe it's an option whether to include child namespaces or not.

    key-value 
    opened by ecton 0
  • ACME renewal logic is wrong

    I received an alert from LetsEncrypt that one of the servers running a certificate issued through BonsaiDb's ACME support hasn't renewed its certificate. Last night, I ssh'ed in, updated the server, and rebooted it, figuring that would trigger the renewal logic for sure.

    This morning, I noticed it still hadn't updated. I tracked it down to the use of a helper function from the acme library. I assumed it did something a little different than what it does -- it returns half the duration remaining on the certificate.

    The way the loop is currently written, the err_cnt parameter is never incremented, which means this function will always return half the remaining time on the certificate -- not the duration until the halfway point of the certificate's lifetime. That's where my misunderstanding came from.

    We should move the renewal logic from being completely stateless to tracking the next renewal timestamp on the TlsCertificate entity. We should keep track of the renewal attempts as well, to help with debugging certificate issuance problems.
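
    A simplified illustration of the flaw -- the helper name below is hypothetical, not the actual acme crate API. Because err_cnt never increments, each iteration waits half of the *remaining* lifetime, so the wait shrinks forever instead of targeting the halfway point of the certificate's lifetime:

    use std::time::Duration;

    // Hypothetical stand-in for the acme helper: half the remaining
    // duration, shortened further after repeated errors.
    fn duration_until_renewal(remaining: Duration, err_cnt: u32) -> Duration {
        remaining / 2u32.pow(err_cnt + 1)
    }

    fn main() {
        let mut remaining = Duration::from_secs(90 * 24 * 60 * 60); // ~90-day cert
        for _ in 0..5 {
            let wait = duration_until_renewal(remaining, 0); // err_cnt never increments
            remaining -= wait;
            println!("waiting {wait:?}; {remaining:?} left on the certificate");
        }
        // `remaining` only ever halves -- the renewal threshold is never crossed.
    }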

    bug networking 
    opened by ecton 1
  • Document supported hardware architectures

    Hi!

    Could you please add information about supported architectures to the documentation? E.g., supported architectures for different operating systems, and any specific requirements on supported instructions, if you have any (e.g., maybe AVX is required -- I do not know).

    This kind of information is important for the end-users.

    Thanks in advance!

    documentation 
    opened by zamazan4ik 2
  • Installing with `cargo install bonsaidb --features cli` fails

    Error:

    error[E0433]: failed to resolve: use of undeclared crate or module `bonsaidb_client`
     --> /home/ludwig/.cargo/registry/src/github.com-1ecc6299db9ec823/bonsaidb-0.4.1/src/cli.rs:5:5
      |
    5 | use bonsaidb_client::{fabruic::Certificate, Client};
      |     ^^^^^^^^^^^^^^^ use of undeclared crate or module `bonsaidb_client`
      |
    help: there is a crate or module with a similar name
      |
    5 | use bonsaidb_core::{fabruic::Certificate, Client};
      |     ~~~~~~~~~~~~~
    
    error[E0432]: unresolved import `bonsaidb_client`
     --> /home/ludwig/.cargo/registry/src/github.com-1ecc6299db9ec823/bonsaidb-0.4.1/src/cli.rs:5:5
      |
    5 | use bonsaidb_client::{fabruic::Certificate, Client};
      |     ^^^^^^^^^^^^^^^ use of undeclared crate or module `bonsaidb_client`
      |
    help: there is a crate or module with a similar name
      |
    5 | use bonsaidb_core::{fabruic::Certificate, Client};
      |     ~~~~~~~~~~~~~
    
    error[E0432]: unresolved import `crate::AnyServerConnection`
      --> /home/ludwig/.cargo/registry/src/github.com-1ecc6299db9ec823/bonsaidb-0.4.1/src/cli.rs:11:5
       |
    11 | use crate::AnyServerConnection;
       |     ^^^^^^^^^^^^^^^^^^^^^^^^^^ no `AnyServerConnection` in the root
    
    Some errors have detailed explanations: E0432, E0433.
    For more information about an error, try `rustc --explain E0432`.
    error: could not compile `bonsaidb` due to 3 previous errors
    warning: build failed, waiting for other jobs to finish...
    error: failed to compile `bonsaidb v0.4.1`, intermediate artifacts can be found at `/tmp/cargo-installBU7lOT`
    
    opened by Aloso 1
Releases(v0.4.1)
  • v0.4.1(Apr 5, 2022)

    All Commits since last release: https://github.com/khonsulabs/bonsaidb/compare/v0.4.0...v0.4.1

    This release is a minor release with an important bug fix for the view indexing system.

    Fixed

    • The View indexing system had a bug when deleting the last view entries for a key while also inserting new entries for that key in the same mapping update operation. This prevented the recording of new entries being made during that mapping operation. This bug was introduced during the optimizations in v0.3.0.

      All views will be reindexed automatically on upgrade.

    • insert_bytes/push_bytes no longer require SerializedCollection to be implemented.

    Known Issues

    • #240: Tuples used as keys do not sort correctly. The format for tuples will be changing in v0.5.0. Compatibility types will be provided to make it easy to preserve existing functionality. The linked issue describes the scenario that currently does not work correctly.
  • v0.4.0(Mar 29, 2022)

    Blog Post: https://bonsaidb.io/blog/bonsaidb-v0-4-0/
    All commits since last release: https://github.com/khonsulabs/bonsaidb/compare/v0.3.0...v0.4.0

    Breaking Changes

    The blog post has a section discussing updating existing projects. If you have any issues upgrading, don't hesitate to open an issue, a discussion, or reach out on Discord.

    • BonsaiDb now has both an async interface as well as a blocking interface. This has caused significant changes, but they can be summarized simply:

      • Connection-related async-compatible traits have had the Async prefix added to them.

        | Blocking           | Async                   |
        |--------------------|-------------------------|
        | Connection         | AsyncConnection         |
        | StorageConnection  | AsyncStorageConnection  |
        | PubSub             | AsyncPubSub             |
        | Subscriber         | AsyncSubscriber         |
        | KeyValue           | AsyncKeyValue           |
        | LowLevelConnection | AsyncLowLevelConnection |

      • Functions that take parameters of the above traits are now offered in pairs: a blocking function and an async function with "_async" at the end of the name. For example, SerializedCollection::get() is the blocking version of SerializedCollection::get_async().

      • For bonsaidb-local, the Database and Storage types are now blocking implementations. Under the hood, BonsaiDb previously used tokio::task::spawn_blocking() to wrap calls to the database in an async API. New types, AsyncDatabase and AsyncStorage, have been added that provide the previous behavior. The types can be converted between each other using the as/into/to_blocking/async helpers available on each type.

        These changes allow bonsaidb-local to be compiled without Tokio. To enable tokio, enable feature async if using bonsaidb-local directly, or enable feature local-async when using the bonsaidb crate.

      • bonsaidb-server still uses networking driven by Tokio. Server/CustomServer implement AsyncStorageConnection, and Server can convert to Storage via the From trait for synchronous database access.

      • For bonsaidb-client, Client implements both AsyncStorageConnection and StorageConnection and is safe for use in both synchronous and asynchronous contexts. In WASM, Client only implements AsyncStorageConnection. For all other platforms, the Client builder supports specifying the Tokio runtime handle if needed. Otherwise, the current runtime will be used or a default runtime will be created automatically if unavailable.

    • Connection::query_with_docs/Connection::query_with_connection_docs now verify the user has permission to DocumentAction::Get. This allows schema authors to allow querying views without allowing the documents themselves to be fetched.

    • ViewAction::DeleteDocs has been removed. Delete docs is now composed of two permissions checks: ViewAction::Query to retrieve the list of documents to delete, and DocumentAction::Delete for each document retrieved. This ensures if permission is denied to delete a specific document, it still cannot be deleted through delete_docs().

    • All APIs have had their limit parameters changed from usize to u32. Since usize is platform-dependent, picking a fixed-width type is more appropriate.

    • CustomApi has been renamed to Api and changed significantly.

      On the Server, Apis are registered on the ServerConfiguration. The Api implementor is treated as the "request" type and is what Clients send to the Server. The Api::Response type is what the Server sends to the Client. Out-of-band responses can still be delivered.

      On the Client, Apis can simply be used without any extra steps. If you expect out-of-band responses, callbacks can be registered when building the client.

      Internally, all BonsaiDb APIs have been refactored to use this -- there is no distinction.

    • The multiuser feature flag has been removed. In the end this caused a lot of needless conditional compilation for removing a single lightweight dependency.

    • User::assume_identity and Role::assume_identity allow assuming the identity of a user or role by their unique ID or name. The connection must be permitted with the newly added ServerAction::AssumeIdentity for the appropriate resource name (user_resource_name or role_resource_name).

    • StorageConnection::authenticate and StorageConnection::assume_identity both return a new instance with the new authentication. This enables authenticating as multiple roles with the same underlying storage connection.

      StorageConnection::session() is a new function that returns the current Session, if one exists. This new type contains information about any currently authenticated identity, the unique id of the session, and the current effective permissions.

      This release note applies equally to AsyncStorageConnection.

    • LowLevelConnection and AsyncLowLevelConnection have been added to group functionality that is not generally meant for the average user to utilize. The methods that were documented as low-level in Connection/AsyncConnection have been moved to this trait. Additionally, new methods that allow performing operations without the generic types have been added to this trait. This functionality is what will be useful in providing applications that can interact with BonsaiDb without having the Schema definitions available.

    • PubSub/AsyncPubSub now allows any Serialize implementation to be used as the topic parameter. New methods publish_bytes and publish_bytes_to_all have been added enabling publishing raw payloads.

    • CollectionName/SchemaName have had common methods extracted to a trait, Qualified. This was part of a refactoring to share code between these two types and the newly introduced ApiName type.

    • BackupLocation and VaultKeyStorage have been changed to blocking traits. bonsaidb-keystorage-s3 wraps a tokio Runtime handle as the AWS SDK requires Tokio.

    • ServerConfiguration now takes a Backend generic parameter, which must match the CustomServer being created. In general the Rust compiler should be able to infer this type based on usage, and therefore shouldn't be a breaking change to many people.

    • The Backend trait now has an associated Error type, which allows for custom error types to be used. When an error occurs during initialization, it is returned. Currently, errors that are returned during client connection handling are printed using log::error and ignored.

    • Key has had its encoding functionality moved to a new trait, KeyEncoding. KeyEncoding has been implemented for borrowed representations of Key types.

      This change allows all view query and collection access to utilize borrowed versions of their key types. For example, if a View's Key type is String, it is now possible to query the view using an &str parameter.
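
      A minimal sketch using the blocking interface -- PostsByTitle is a hypothetical view whose Key type is String:

      // Query a String-keyed view with a borrowed &str, with no String
      // allocation, via the KeyEncoding implementation for &str.
      let posts = db
          .view::<PostsByTitle>()
          .with_key("New Rust version released")
          .query()?;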

    Added

    • Range::default() now returns an unbounded range, and Bound::default() returns Bound::Unbounded.

    • Range now has several builder-pattern style methods to help construct ranges. In general, users should simply use the built-in range operators (.., start.., start..end, start..=end), as they are able to represent nearly every range pattern. The built-in range operators do not support specifying an excluded start bound, while the new method Range::after allows setting an excluded start bound.

    • #215: StorageConnection::delete_user() is a new function that allows deleting a user by name or id. Deleting a user is permitted with the ServerAction::DeleteUser action.

    • bonsaidb_core::key::encode_composite_field and bonsaidb_core::key::decode_composite_field have been added which allow building more complex Key implementations that are composed of multiple fields. These functions are what the Key implementation for tuples is powered by.

    • Key is now implemented for [u8; N].

    • #221: headers() has been added as a function to all collection list builders, enabling querying just the headers of a document.

    • Transaction now has apply() and apply_async(), which are the higher-level API to LowLevelConnection::apply_transaction.

    • ArgonConfiguration can now be specified when building StorageConfiguration/ServerConfiguration using Builder::argon.

    • SystemTime and Duration now have Key implementations.

    • bonsaidb_core::key::time has been added which contains a wide array of types that enable storing timestamps and durations with limited resolution, powered by variable integer encoding to reduce the number of bytes needed to encode the values.

      These types are powered by two traits: TimeResolution and TimeEpoch. Using these traits, the LimitedResolutionDuration and LimitedResolutionTimestamp types can be used for completely custom resolutions (e.g., 15 minutes) and epochs (the base moment in time to store the limited resolution relative to).

      By constraining the resolution and using an epoch that is closer to the average timestamp being stored, we can reduce the number of bytes required to represent the values from 12 bytes to significantly fewer.

      These type aliases have been added in these three categories:

      • Durations: Weeks, Days, Hours, Minutes, Seconds, Milliseconds, Microseconds, and Nanoseconds.
      • Timestamps relative to the Unix Epoch (Jan 1, 1970 00:00:00 UTC): WeeksSinceUnixEpoch, DaysSinceUnixEpoch, ...
      • Timestamps relative to the Bonsai Epoch (Mar 20, 2031 04:31:47 UTC): TimestampAsWeeks, TimestampAsDays, ...
    • Backend::configure() is a new function that allows a Backend to set configuration options on ServerConfiguration before the server is initialized. This is a good location for Backends to define their Apis and Schemas.

    Changed

    • Counting a list of documents now uses reduce() in Nebari, a new feature that allows aggregating the embedded statistics without traversing the entire tree. The net result is that retrieving a Collection's count should be near instant and returning the count of a range of keys should be very fast as well.
    • StorageConnection::create_database/AsyncStorageConnection::create_database now returns the newly created database.

    Fixed

    • Defining multiple views with the same name for the same collection will now return an error.
  • v0.3.0(Mar 3, 2022)

    Blog Post: https://bonsaidb.io/blog/bonsaidb-v0-3-0

    This release welcomes @vbmade2000 as a contributor to BonsaiDb. Thank you!

    Breaking Changes

    • bonsaidb::local::jobs is now private. It used to be a separate, public crate in the PliantDb days. After thinking about the job scheduler more, this initial implementation is better suited for the internal task management than the higher-level jobs system. As such, it has been internalized.

    • bonsaidb::core::transaction::Changes::Documents has been changed to store the CollectionNames separately from the ChangedDocuments. This makes the transaction log entries smaller, as collection names aren't copied for each document.

      The storage layer is fully backwards compatible and will automatically convert existing transactions to the new format.

    Fixed

    • Listing executed transactions that were written in v0.1 was broken in v0.2. Backwards compatibility is now automatically tested to help ensure this sort of issue won't happen in the future again.

    Added

    • SerializedCollection::list_with_prefix, connection::Collection::list_with_prefix, and connection::View::with_key_prefix have been added as an easy way to filter results based on whether the key starts with the given prefix.

      This is supported by a new trait, IntoPrefixRange. This trait has been implemented for all byte-based key implementations as well as for String.

    • Operation::push_serialized has been added, which calls natural_id before creating an Operation::Insert variant.

    • Tasks::parallelization and Builder::workers_parallelization have been added as a way to control how many threads can be used by any given task/worker. This is automatically configured to be the number of cpu cores detected.

    • count() is a new function on the list builders, available via:

      • SerializedCollection::all(db).count().await
      • SerializedCollection::list(42.., db).count().await
      • db.collection::<Collection>().all().count().await
      • db.collection::<Collection>().list(42..).count().await

      The performance of this call is not as good as it will eventually be, as it is currently doing more work than strictly necessary.

    • #215: StorageConnection::delete_user has been added, enabling deletions of users without directly interacting with the admin database.

    Changed

    • The view map/reduce system has been optimized to take advantage of some parallelism. The view system is still not highly optimized, but this change makes a significant improvement in performance.
  • v0.2.0(Feb 18, 2022)

    Blog post: https://bonsaidb.io/blog/bonsaidb-v0-2-0

    Breaking Changes

    • bonsaidb::core::Error::DocumentConflict now contains a Header instead of just the document's ID. This allows an application to re-submit an update with the updated header without another request to the database.

    • StorageConfiguration::vault_key_storage now uses an Arc instead of a Box. This change allows StorageConfiguration and ServerConfiguration to implement Clone.

    • Document::create_new_revision has been removed. It was meant to be an internal function.

    • Document now requires AsRef<Header> and AsMut<Header> instead of Deref<Header>/DerefMut. The publicly visible change is that the shortcut of accessing document.header.emit_* through deref by using document.emit_* will no longer work. This impacts CollectionDocument, OwnedDocument, and BorrowedDocument.

      This removes a little magic, but in some code flows, it was impossible to use Deref anyways due to Deref borrowing the entire document, not just the header.

    • Collection::PrimaryKey is a new associated type that allows customizing the type that uniquely identifies documents inside of a Collection. Users of the derive macro will be unaffected by this change. If you're upgrading existing collections and wish to maintain backwards compatibility, use u64 as the type.

      A natural_id() function can now be implemented on SerializedCollection or DefaultSerialization, which allows extracting a primary key value from a new document being pushed.

      A new example, primary-keys.rs, has been added showing basic usage of changing the primary key type. This change resulted in a sequence of breaking changes that are listed independently.

    • Key has been moved from bonsaidb::core::schema::view to bonsaidb::core::key.

    • Key::as/from_big_endian_bytes have been renamed to Key::as/from_ord_bytes.

    • Key::first_value() and Key::next_value() are new provided functions. By default, these functions return NextValueError::Unsupported.

      Key::first_value() allows a Key type to define the first value in its sequence. For example, 0_u64 is the result of u64::first_value().

      Key::next_value() allows a Key type to find the next value in sequence from the current value. Implementors should never wrap, and should instead return NextValueError::WouldWrap.

      Sensible defaults have been implemented for all numeric types.
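
      A sketch of the documented behavior, assuming the sensible integer defaults mentioned above:

      assert_eq!(u64::first_value().unwrap(), 0);
      assert_eq!(0_u64.next_value().unwrap(), 1);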

    • Connection and its related types have had all previously hard-coded primary keys of u64 changed to generic parameters that can accept either a DocumentId or Collection::PrimaryKey. The affected methods are:

      • Connection::insert
      • Connection::overwrite
      • Connection::get
      • Connection::get_multiple
      • Connection::list
      • connection::Collection::insert
      • connection::Collection::overwrite
      • connection::Collection::get
      • connection::Collection::get_multiple
      • connection::Collection::list
      • SerializedCollection::insert
      • SerializedCollection::insert_into
      • SerializedCollection::overwrite
      • SerializedCollection::overwrite_into
      • SerializedCollection::get
      • SerializedCollection::get_multiple
      • SerializedCollection::list
      • transaction::Operation::insert_serialized
      • transaction::Operation::overwrite_serialized
    • Header::id has changed from u64 to DocumentId, and CollectionHeader<PrimaryKey> has been added which contains Collection::PrimaryKey deserialized.

      These previous usages of Header have been changed to CollectionHeader:

      • Connection::insert result type
      • Connection::overwrite result type
      • connection::Collection::insert result type
      • connection::Collection::insert_bytes result type
      • connection::Collection::push result type
      • connection::Collection::push_bytes result type
      • CollectionDocument::header

      The Header::emit* functions have been moved to a new trait, Emit. This trait is implemented by both Header and CollectionHeader. The functions moved are:

      • emit()
      • emit_key()
      • emit_value()
      • emit_key_and_value()

      These functions now return a Result, as encoding a primary key value can fail if it is larger than DocumentId::MAX_LENGTH.

    • HasHeader is a new trait that allows accessing a Header generically from many types. This type is used in Connection::delete and connection::Collection::delete.

    • Types and functions that used u64 as a document ID have been replaced with DocumentIds. The number of locations are too many to list. If you need to convert from a u64 to a DocumentId, you can use DocumentId::from_u64().

    • Document::contents and Document::set_contents are now more "painful" to access due to the generic parameter added to Document. SerializedCollection::document_contents(doc) and SerializedCollection::set_document_contents(doc, new_contents) have been provided as easier ways to invoke the same functionality. For example:

      let contents = doc.contents::<MyCollection>()?;
      

      Becomes:

      let contents = MyCollection::document_contents(&doc)?;
      
    • Backups made prior to 0.2.0 will not be able to be restored with this updated version. The document IDs are encoded differently than in prior versions.

    Added

    • Optional compression is now available, using the LZ4 algorithm. StorageConfiguration::default_compression controls the setting. When using the bonsaidb crate, the feature can be made available using either local-compression or server-compression. When using bonsaidb-server or bonsaidb-local directly, the feature name is compression.

      This compression is currently applied on all chunks of data written to BonsaiDb that are larger than a hardcoded threshold. This includes the Key-Value store. The threshold may be configurable in the future.

      Some of the benchmark suite has been expanded to include comparisons between local storage with and without compression.

    • Added ability to "overwrite" documents without checking the stored revision information. Because this removes a layer of safety, it has its own permissible action: DocumentAction::Overwrite. The functions that have been added are:

      • connection::Connection::overwrite
      • connection::Collection::overwrite
      • schema::SerializedCollection::overwrite
      • schema::SerializedCollection::overwrite_into
      • document::CollectionDocument::overwrite
      • transaction::Transaction::overwrite
      • transaction::Operation::overwrite

    Changed

    • Internal dependencies between crates are now pinned based on their needs. This means that bonsaidb-server will require a matching version of bonsaidb-local when compiling. A simple example of a change that is a breaking compilation change but is not breaking from a compatibility standpoint is a change to a structure where #[serde(rename)] is used to remap an old value to a new value.

      The only crate currently not pinning its dependencies is bonsaidb-keystorage-s3. This crate, and hopefully many crates to come, are only tying themselves to the public API of BonsaiDb.

      This may generate slightly more crate updates than absolutely necessary, but for a small team it seems like the most manageable approach.

    Fixed

    • The view system now tracks an internal version number in addition to the version specified in the view definition. This allows internal structures to be upgraded transparently.
    • Applying a transaction to a collection with a unique view now ensures a view mapping job has finished if the view's integrity check spawns one before allowing the transaction to begin.
  • v0.1.2(Feb 12, 2022)

  • v0.1.1(Feb 11, 2022)

    Thanks goes to @ModProg for his contributions on the new derive macros!

    This release contains no breaking changes.

    Added

    • SchemaName::private and CollectionName::private are two new constructors that allow defining names without specifying an authority. Developers creating reusable collections and/or schemas should not use these methods, as namespacing is meant to help prevent name collisions.

    • connection::Collection::all() and SchemaCollection::all() have been implemented as simple wrappers around list(..).

    • #146, #187: The Schema, Collection, and View traits can now be derived rather than manually implemented:

      #[derive(Debug, Schema)]
      #[schema(name = "my-schema", collections = [Shape])]
      struct MySchema;
      
      #[derive(Debug, Serialize, Deserialize, Collection)]
      #[collection(name = "shapes", views = [ShapesByNumberOfSides])]
      struct Shape {
          pub sides: u32,
      }
      
      #[derive(Debug, Clone, View)]
      #[view(collection = Shape, key = u32, value = usize, name = "by-number-of-sides")]
      struct ShapesByNumberOfSides;
      
    • #169: Memory-only instances of Storage can be created now. This is primarily intended for testing purposes.

    Changed

    • Inline examples have been added for every connection::Collection and connection::View function.
    • All examples have been updated to the new derive macro syntax. Additionally, documentation and examples for deriving Schema, Collection, and View have been added to the respective traits.
  • v0.1.0(Feb 5, 2022)

    v0.1.0

    Announcement Blog Post: https://bonsaidb.io/blog/announcing-bonsaidb-alpha
    Closed Issues: https://github.com/khonsulabs/bonsaidb/milestone/2?closed=1
    All Commits: https://github.com/khonsulabs/bonsaidb/compare/v0.1.0-dev.4...v0.1.0

    Added

    • bonsaidb::local::admin now exposes collections that are used to manage BonsaiDb.
    • Ability to add users, set a user's password, and log in as a user.
    • Each bonsaidb::local::Storage now has a unique ID. It will be randomly generated upon launch. If for some reason a random value isn't desired, it can be overridden in the Configuration.
    • Centralized secrets vault: Enables limited at-rest encryption. See bonsaidb::core::vault for more information.
    • For serializable types, Collection now defines easier methods for dealing with documents. NamedCollection allows collections that expose a unique name view to have easy ways to convert between IDs and names.
    • Server Backend trait now defines connection lifecycle functions that can be overridden to customize behavior when clients connect, disconnect, or authenticate.
    • Client now has a build() method to allow for customizing connections.
    • CustomApi responses can now be sent by the server via ConnectedClient::send(). The client can now register a callback to receive out-of-band responses.
    • Backend has a new associated type, CustomApiDispatcher which replaces Server::set_custom_api_dispatcher. This change allows custom api dispatchers to receive the server or client during initialization if needed. For example, if a custom API needs to know the caller's identity, you can store the client in your dispatcher and access it in your handlers.
    • Backend has a new associated type: ClientData. This associated type can be used to associate data on a per-ConnectedClient basis.
    • ServerConfiguration has a new setting: client_simultaneous_request_limit. It controls the amount of query pipelining a single connection can achieve. Submitting more queries on a single connection will block reading additional requests from the network connection until responses have been sent.
    • ServerConfiguration now supports authenticated_permissions, allowing a set of permissions to be defined that are applied to any user that has successfully authenticated.
    • CollectionView is a new trait that can be implemented instead of View. The map() function takes a CollectionDocument parameter that is already deserialized for you.
    • bonsaidb_core now has two new macros to ease some tedium of writing simple views: define_basic_mapped_view and define_basic_unique_mapped_view.
    • BonsaiDb now uses log internally to report errors that are being ignored or happened in a background task.
    • Multiple crates now offer an "instrument" feature which enables instrumentation using the tracing ecosystem.
    • Moved all database() functions to StorageConnection. This allows fully generic code to be written against a "server".
    • Added listen_for_shutdown(), which listens for SIGINT and SIGQUIT and attempts to shut the server down gracefully.
    • Automatic TLS certificate acquisition can be performed using ACME on TCP port 443. This is automatically performed if the feature flag is enabled.
    • BonsaiDb server can now listen for generic TCP connections with and without TLS. This is primarily designed to allow extending the HTTP port to support additional HTTP endpoints beyond just WebSockets. However, because the TLS certificate acquired on port 443 can be used in other protocols such as SMTP, it can be useful to allow BonsaiDb to control the networking connections. listen_for_tcp_on and listen_for_secure_tcp_on accept a parameter that implements the TcpService trait. See the Axum example for how this integration can be used in conjunction with WebSockets.
    • Added convenience methods to Transaction and Operation to make it easier to build multi-operation transactions.
    • The Key trait is now automatically implemented for tuples of up to 8 entries in length.
    • CollectionDocument::modify() enables updating a document using a flow that will automatically fetch the document if a conflict is detected, re-invoking the callback to redo the modifications. This is useful for situations where you may have cached a document locally and wish to update it in the future, but don't want to refresh it before saving. It also will help, in general, with saving documents that might have some contention.
    • CustomServer::authenticate_client_as() allows setting a ConnectedClient's authenticated user, skipping BonsaiDb's authentication. This allows developers to still utilize the users and permissions within BonsaiDb while authenticating via some other mechanism.
    • SerializedCollection::list() and SerializedCollection::get_multiple() have been added to make it easier to retrieve CollectionDocument<T>s.

    Changed

    • Configuration has been refactored to use a builder-style pattern. bonsaidb::local::config::Configuration has been renamed StorageConfiguration, and bonsaidb::server::Configuration has been renamed ServerConfiguration.

    • Database::open_local and Storage::open_local have been renamed to open.

    • Database::open, Storage::open, and Server::open no longer take a path argument. The path is provided from the configuration.

    • Listing all schemas and databases will now include the built-in admin database.

    • The underlying dependency on sled has been replaced with an in-house storage implementation, Nebari.

    • The command-line interface has received an overhaul.

      • A new trait, CommandLine can be implemented on a type that implements Backend to utilize the built-in, extensible command line interface. An example of this is located at ./examples/basic-server/examples/cli.rs.
      • The parameter types to execute() functions have changed.
      • This interface will receive further refinement as part of switching to clap 3.0 once it is fully released.
    • View::map now returns a Mappings instead of an Option, allowing for emitting of multiple keys.

    • View mapping now stores the source document header, not just the ID.

    • ServerConfiguration::default_permissions has been changed into a DefaultPermissions enum.

    • Changed the default serialization format from CBOR to an in-house format, Pot.

    • Key now has a new associated constant: LENGTH. If a value is provided, the result of converting the value should always produce the length specified. This new information is used to automatically implement the Key trait for tuples.

    • The Key implementation for EnumKey has been updated to use ordered-varint to minimize the size of the indexes. Previously, each entry in the view was always 8 bytes.

    • Connection and SerializedCollection have had their insert/insert_into functions modified to include an id parameter. New functions named push and push_into have been added that do not accept an id parameter. This is in an effort to keep naming consistent with common collection names in Rust.

    • Operation::insert and Command::Insert now take an optional u64 parameter which can be used to insert a document with a specific ID rather than having one chosen. If a document already exists, a conflict error will be returned.

    • bonsaidb::local::backup has been replaced with Storage::backup and Storage::restore. This backup format is incompatible with the old format, but is built with proper support for restoring at-rest encrypted collections. Backups are not encrypted, but the old implementation could not be updated to support restoring the documents into an encrypted state.

      This new functionality exposes BackupLocation, an async_trait that enables arbitrary backup locations.

    • KeyValue storage has internally changed its format. Because this was pre-alpha, this data loss was permitted. If this is an issue for anyone, the data is still there; only the format of the key has been changed. By editing any database files directly using Nebari, you can change the format from "namespace.key" to "namespace\0key", where \0 is a single null byte.

    • ExecutedTransactions::changes is now a Changes enum, which can be a list of ChangedDocuments or ChangedKeys.

    • The Key-Value store is now semi-transactional and more optimized. The behavior of persistence can be customized using the key_value_persistence option when opening a BonsaiDb instance. This can enable higher performance at the risk of data loss in the event of an unexpected hardware or power failure.

    • A new trait, SerializedCollection, now controls serialization within CollectionDocument, CollectionView, and other helper methods that serialize document contents. This allows any serialization format that implements transmog::Format to be used within BonsaiDb by setting the Format associated type within your SerializedCollection implementation.

      For users who just want the default serialization, a convenience trait DefaultSerialization has been added. All types that implement this trait will automatically implement SerializedCollection using BonsaiDb's preferred settings.

    • A new trait, SerializedView, now controls serialization of view values. This uses a similar approach to SerializedCollection. For users who just want the default serialization, a convenience trait DefaultViewSerialization has been added. All types that implement this trait will automatically implement SerializedView using BonsaiDb's preferred settings.

      The view-histogram example has been updated to define a custom transmog::Format implementation rather than creating a Serde-based wrapper.

    • View has been split into two traits to allow separating client and server logic entirely if desired. The ViewSchema trait is where map(), reduce(), version(), and unique() have moved. If you're using a CollectionView, the implementation should now be a combination of View and CollectionViewSchema.

    • CollectionName, SchemaName, and Name all no longer generate errors if using invalid characters. When BonsaiDb needs to use these names in a context that must be able to be parsed, the names are encoded automatically into a safe format. This change also means that View::view_name(), Collection::collection_name(), and Schema::schema_name() have been updated to not return error types.

    • The Document type has been renamed to OwnedDocument. A trait named Document has been introduced that contains most of the functionality of Document, but is now implemented by OwnedDocument in addition to a new type: BorrowedDocument. Most APIs have been updated to return OwnedDocuments. The View mapping process receives a BorrowedDocument within the map() function.

      This refactor changes the signatures of ViewSchema::map(), ViewSchema::reduce(), CollectionViewSchema::map(), and CollectionViewSchema::reduce().

      The benefit of this breaking change is that the view mapping process now can happen with fewer copies of data.

    • A new function, query_with_collection_docs() is provided that is functionally identical to query_with_docs() except the return type contains already deserialized CollectionDocument<T>s.

    • Moved CollectionDocument from bonsaidb_core::schema to bonsaidb_core::document.

    • Due to issues with unmaintained crates, X25519 has been swapped for P256 in the vault implementation. This is an intra-alpha breaking change. Use the backup functionality with the existing version of BonsaiDb to export a decrypted version of your data that you can restore into the new version of BonsaiDb.

      If you have encryption enabled but aren't actually storing any encrypted data yet, you can remove these files from inside your database:

      • mydb.bonsaidb/master-keys
      • mydb.bonsaidb/vault-keys/ (or remove the keys from your S3 bucket)

    • query_with_docs() and query_with_collection_docs() now return a MappedDocuments structure, which contains a list of mappings and a BTreeMap containing the associated documents. A view is allowed to emit more than one mapping per document, and due to that design, a single document can be returned multiple times.
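
      A sketch of the distinction (the mappings and documents field names follow the description above and are assumptions):

      ```rust
      let results = db
          .view::<ShapesByNumberOfSides>()
          .query_with_docs()
          .await?;
      // `mappings` can outnumber `documents`: a document that emitted several
      // keys appears once per mapping but only once in the document map.
      println!(
          "{} mappings across {} distinct documents",
          results.mappings.len(),
          results.documents.len()
      );
      ```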

    Fixed

    • Adding two collections with the same name now throws an error.
  • v0.1.0-dev.4 (Jun 23, 2021)

    Welcome to the first release on GitHub. I wanted to take this moment to talk a little bit about the development of this project.

    I've added a CHANGELOG.md. I may not do much to highlight breaking changes this early in development, but once we have a 0.1.0 release, I'll start tracking breaking changes. At this point, PliantDb is purely experimental, and while I applaud anyone attempting to use it, code written against it this early will likely break as I continue to develop the API.

    For the near future, I want the flexibility to change drastic things without worrying about how it impacts existing users' code. Until we have a successful, decent-sized project using PliantDb, I think it's impossible to be confident everything is designed well.

    Lastly, I wanted to mention that this year seems to just fly by. I hadn't realized it was April 26 when the last release happened. I have a goal to get a new version of Cosmic Verge running on PliantDb, but we are being ambitious: in the process, we are redoing our presentation layer. As such, it may have looked like I lost interest in this project and moved on.

    Fear not, this project is near and dear to my heart, which is why the first product of the Gooey project is this example, which shows how to build a simple Gooey app and PliantDb server with a basic custom API. Because of the new support for WebAssembly, the example even works in a modern browser.

    If you have any questions or feedback, feel free to join the discussion on our Discourse forums.

    From the CHANGELOG

    Added

    • View::unique() has been added, allowing a View to restrict saving documents when multiple documents would end up with the same key emitted for this view. For example, if you have a User collection and want to ensure each User has a unique email_address, you could create a View keyed on email_address and return true from unique(), and PliantDb will enforce that constraint, as sketched below.
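
      A sketch of that example using this release's View trait (the User type is hypothetical, and its Collection implementation is omitted for brevity):

      ```rust
      #[derive(Debug, Serialize, Deserialize)]
      struct User {
          pub email_address: String,
      }

      #[derive(Debug)]
      struct UsersByEmail;

      impl View for UsersByEmail {
          type Collection = User;
          type Key = String;
          type Value = ();

          // Saving a document whose email collides with one already emitted by
          // another document is rejected.
          fn unique(&self) -> bool {
              true
          }

          fn version(&self) -> u64 {
              1
          }

          fn name(&self) -> Result<Name, InvalidNameError> {
              Name::new("by-email-address")
          }

          fn map(&self, document: &Document<'_>) -> MapResult<Self::Key, Self::Value> {
              let user = document.contents::<User>()?;
              Ok(Some(document.emit_key_and_value(user.email_address, ())))
          }
      }
      ```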

    • Permissions support has been added across the platform with granular access. The pliantdb::core::permissions module contains the data types involved. More documentation and examples are to come; users and roles haven't been added yet.

    • The initial underpinnings of customizing the PliantDb server have been added. First, there's the Backend trait. Right now, its only purpose is to allow defining a CustomApi. This allows applications built with PliantDb to extend the network protocol with Request and Response types that just need to support serde. For a full example, check out this in-development Gooey example.
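
      A rough sketch of the shape of this customization (the associated-type names are assumptions based on the description above):

      ```rust
      use serde::{Deserialize, Serialize};

      // Hypothetical request/response pair; they only need serde support.
      #[derive(Debug, Serialize, Deserialize)]
      enum PingRequest {
          Ping,
      }

      #[derive(Debug, Serialize, Deserialize)]
      enum PingResponse {
          Pong,
      }

      #[derive(Debug)]
      struct MyBackend;

      impl Backend for MyBackend {
          type CustomApi = PingApi;
      }

      #[derive(Debug)]
      struct PingApi;

      impl CustomApi for PingApi {
          type Request = PingRequest;
          type Response = PingResponse;
      }
      ```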

    • An initial version of a WebAssembly client is now supported. It only supports WebSockets. While there has been news of QUIC support in the browser, it's a limited implementation that only exposes an HTTP protocol. As such, it is incompatible with the PliantDb protocol. Eventually, we hope to support WebRTC as an alternative to TCP in WebAssembly. The example linked in the previous bullet point can be built and loaded in a browser.

Owner
Khonsu Labs
r2d2-couchdb: CouchDB support for the r2d2 connection pool

Pablo Aguiar 10 Dec 2, 2022
A Distributed SQL Database - Building the Database in the Public to Learn Database Internals

Table of Contents Overview Usage TODO MVCC in entangleDB SQL Query Execution in entangleDB entangleDB Raft Consensus Engine What I am trying to build

Sarthak Dalabehera 38 Jan 2, 2024
RefineDB - A strongly-typed document database that runs on any transactional key-value store.

Heyang Zhou 375 Jan 4, 2023
A scalable, distributed, collaborative, document-graph database, for the realtime web

SurrealDB is the ultimate cloud database for tomorrow's applications. Develop easier. Build faster. Scale quicker. What is SurrealDB? SurrealDB is an end-to-end

SurrealDB 16.9k Jan 8, 2023
CouchDB client-side library for the Rust programming language

Chill Chill is a client-side CouchDB library for the Rust programming language, available on crates.io. It targets Rust Stable. Chill's three chief de

null 35 Jun 26, 2022
Sofa - CouchDB for Rust

Sofa - CouchDB for Rust Documentation Here: http://docs.rs/sofa Installation [dependencies] sofa = "0.6" Description This crate is an interface to Cou

66 Origin 40 Feb 11, 2022
CouchDB client library for the Rust programming language

CouchDB This project is reborn! As of its v0.6.0 release, the couchdb crate has new life as a toolkit instead of providing a full-blown client. In a n

null 20 Jul 17, 2021
FeOphant - A SQL database server written in Rust and inspired by PostgreSQL.

A PostgreSQL inspired SQL database written in Rust.

Christopher Hotchkiss 27 Dec 7, 2022
An explorer for the DeArrow database as a web application. Inspired by Lartza's SBrowser

DeArrow Browser An explorer for the DeArrow database as a web application. Inspired by Lartza's SBbrowser. Public instance available at dearrow.minibo

null 3 Aug 10, 2023
Simple document-based NoSQL DBMS from scratch

cudb (a.k.a. cuda++) Simple document-based noSQL DBMS modelled after MongoDB. (Has nothing to do with CUDA, has a lot to do with the Cooper Union and

Jonathan Lam 3 Dec 18, 2021
XLite - query Excel (.xlsx, .xls) and Open Document spreadsheets (.ods) as SQLite virtual tables

XLite - query Excel (.xlsx, .xls) and Open Document spreadsheets (.ods) as SQLite virtual tables XLite is a SQLite extension written in Rust. The main

Sergey Khabibullin 1.1k Dec 28, 2022
Document your SQLite tables and columns with in-line comments

sqlite-docs A SQLite extension, CLI, and library for documentating SQLite tables, columns, and extensions. Warning sqlite-docs is still young and not

Alex Garcia 20 Jul 2, 2023
A user crud written in Rust, designed to connect to a MySQL database with full integration test coverage.

SQLX User CRUD Purpose This application demonstrates how to implement a common design for CRUDs in, potentially, a system of microservices. The de

null 78 Nov 27, 2022
AgateDB is an embeddable, persistent and fast key-value (KV) database written in pure Rust

AgateDB is an embeddable, persistent and fast key-value (KV) database written in pure Rust. It is designed as an experimental engine for the TiKV project, and will bring aggressive optimizations for TiKV specifically.

TiKV Project 535 Jan 9, 2023
A cross-platform terminal database tool written in Rust

gobang is currently in alpha A cross-platform terminal database tool written in Rust Features Cross-platform support (macOS, Windows, Linux) Mu

Takayuki Maeda 2.1k Jan 5, 2023
GlueSQL is a SQL database library written in Rust

GlueSQL is a SQL database library written in Rust. It provides a parser (sqlparser-rs), execution layer, and optional storage (sled) packaged into a single library.

GlueSQL 2.1k Jan 8, 2023
Distributed SQL database in Rust, written as a learning project

toyDB Distributed SQL database in Rust, written as a learning project. Most components are built from scratch, including: Raft-based distributed conse

Erik Grinaker 4.6k Jan 8, 2023
A fast and simple in-memory database with a key-value data model written in Rust

Segment Segment is a simple & fast in-memory database with a key-value data model written in Rust. Features Dynamic keyspaces Keyspace level control o

Segment 61 Jan 5, 2023
A "blazingly" fast key-value pair database without bloat written in rust

A fast key-value pair in memory database. With a very simple and fast API. At Xiler it gets used to store and manage client sessions throughout the pl

Arthur 16 Dec 16, 2022