Next-generation developer-first NoSQL database

Overview



AnnaDB brings familiar programming-language patterns into the database world to solve the problem of relations:

Every object and sub-object (an item of a vector or map) stored in AnnaDB has a link id. This link can be used as a field value in any other object, and the database will fetch and resolve it automatically in all operations, with no additional commands in queries.
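To illustrate the idea, here is a toy in-memory sketch of automatic link resolution. This is an assumption-laden model for explanation only, not the AnnaDB engine or its driver API:

```python
import uuid

# Toy model of AnnaDB-style links: every stored object gets a link id, and
# fetching an object resolves any link ids found in its field values.
# Illustrative sketch only; not the real AnnaDB implementation.
class ToyStore:
    def __init__(self):
        self._objects = {}

    def insert(self, obj):
        """Store obj and return its generated link id."""
        link_id = str(uuid.uuid4())
        self._objects[link_id] = obj
        return link_id

    def get(self, link_id):
        """Fetch an object, transparently resolving nested links."""
        def resolve(value):
            if isinstance(value, str) and value in self._objects:
                return self.get(value)  # follow the link automatically
            if isinstance(value, dict):
                return {k: resolve(v) for k, v in value.items()}
            if isinstance(value, list):
                return [resolve(v) for v in value]
            return value
        return resolve(self._objects[link_id])

store = ToyStore()
author = store.insert({"name": "Roman"})
post = store.insert({"title": "Hello", "author": author})
print(store.get(post))  # the author link is resolved inline
```

The point is that the caller never dereferences the link by hand: storing an id as a field value is enough for fetches to return the nested object.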

Features:

  • Flexible object structure
  • Relations
  • Transactions
Issues
  • "TypeError: Not bool" on the initial insert

    Hi,

    Please find the traceback for the crash on the very first insert into AnnaDB: https://gist.github.com/taurus-forever/56492ae09c13b94e1a4788241d432a34

    The command was copied from the documentation: https://annadb.dev/documentation/insert/#example

    opened by taurus-forever 3
  • How to get a single entry by using a UUID instead of a "link"

    How to get a single entry by using a UUID instead of a "link"

    Hi there,

    I wonder how I would get a single entry by using a UUID in Python. The code below gives me this error: "Get query can contain only links"

    from annadb import Connection
    from uuid import UUID
    
    db = Connection.from_connection_string("annadb://localhost:10001")
    
    
    def get(container: str, id: UUID) -> None:
        container = db[container]
        response = container.get_one(id).run()
        print(response)
        
    
    get("category", "bd6b60a1-02c3-401f-954b-f136b7f62b25")
    
    opened by tibeer 2
  • Python Client ImportError cannot import name NoneType

    Python Client ImportError cannot import name NoneType

    $ annadb --uri annadb://playground.annadb.dev:10001
    Traceback (most recent call last):
      File "c:\python39\lib\runpy.py", line 197, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "c:\python39\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "C:\Python39\Scripts\annadb.exe\__main__.py", line 4, in <module>
      File "c:\python39\lib\site-packages\annadb\__init__.py", line 18, in <module>
        from annadb.connection import Connection
      File "c:\python39\lib\site-packages\annadb\connection.py", line 6, in <module>
        from annadb.dump import to_str
      File "c:\python39\lib\site-packages\annadb\dump.py", line 1, in <module>
        from types import NoneType
    ImportError: cannot import name 'NoneType' from 'types' (c:\python39\lib\types.py)
    

    https://github.com/roman-right/AnnaDB/blob/17e85f13532f864911586d9cf1b1e734d762de4b/drivers/python/annadb/dump.py#L1

    I am not much of a python developer, but I think this StackOverflow answer might also help:

    https://stackoverflow.com/a/15844751/1707323

    There is no longer a NoneType reference in the types module. You should just check for identity with None directly, i.e. obj is None. Alternatively, if you really need NoneType, you can get it with:

    NoneType = type(None)
    

    This is actually the exact same way types.NoneType was previously defined, before it was removed on November 28th, 2007.
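A version-tolerant form of that workaround (note that types.NoneType was added back in Python 3.10, so the import only fails on 3.9 and earlier):

```python
# Use types.NoneType where it exists (Python 3.10+), and fall back to
# type(None), which is exactly how it was previously defined, on 3.9 and older.
try:
    from types import NoneType
except ImportError:  # Python <= 3.9
    NoneType = type(None)

print(isinstance(None, NoneType))  # True
```

Applying this in annadb/dump.py would let the driver import cleanly on both old and new interpreters.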

    opened by amaster507 0
  • Feature Request: Conditional Operations

    Feature Request: Conditional Operations

    I am thinking of a use case for this database, and one point is building a layer of abstraction on top of this underlying simplicity. One of my goals is to reduce the data footprint by storing each value only once. To do that, though, I would need a single operation that takes a value and either inserts it as a new item and returns the item id, or, if it already exists, returns the id of the existing item without inserting.

    Here is what that might look like in TySON:

    collection|strings|:q[
      find[
        eq{root:s|foo|}
      ],
      if[
        lt{count:n|1|},
        insert[s|foo|]
      ]
    ]
    
    enhancement 
    opened by amaster507 0
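The find-then-insert-if-absent semantics requested above can be modeled as a toy in-memory sketch (AnnaDB has no such conditional primitive yet; that is the request):

```python
import uuid

# Toy "insert only if the value does not already exist" over a dict store,
# mirroring the proposed find[eq{...}] + if[lt{count:n|1|}, insert[...]] query.
# Illustrative model only, not real AnnaDB behavior.
store = {}  # link id -> value

def insert_if_absent(value):
    # find[eq{root:...}] equivalent: scan for an existing equal value
    for link_id, existing in store.items():
        if existing == value:
            return link_id  # item exists: return its id, insert nothing
    link_id = str(uuid.uuid4())
    store[link_id] = value
    return link_id

first = insert_if_absent("foo")
second = insert_if_absent("foo")
print(first == second)  # True: same id both times, value stored once
```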
  • Collection Context Lost when joined

    Collection Context Lost when joined

    There are times when the collection context would be beneficial: knowing not just the data that is linked to, but that it is linked data from another collection. How does one know they are updating data from a different collection? This assumes that when inserting data with links, the linked data nodes are not copied but are just "linked".

    I propose some kind of structure where you keep the context of the linked collection in the tree.

    Let me know if you need some examples of my thoughts and reasonings here.

    opened by amaster507 4
  • Query Error Handling Pointing

    Query Error Handling Pointing

    [screenshot]

    I think the error was trying to point to the test|... part of the string, but it did not point there successfully, probably because the query was parsed out and split across lines. I expect it to point to the correct part of the string, which can be seen if I follow the arrow up to the original query.

    Furthermore, all I was missing was the closing pipe |, which I have omitted quite often while working with this syntax. That specific error could be reported better: the parser could almost infer that a pipe is missing, perhaps with a regex or by noting that the pipes are unbalanced.
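The unbalanced-pipe heuristic suggested here could be as simple as a parity check. A sketch, assuming pipes only ever appear as open/close pairs around literals (the real TySON grammar may be more permissive):

```python
# If pipes always come in pairs around literals, an odd total count means one
# is missing. Illustrative heuristic only, not the actual AnnaDB parser.
def pipes_balanced(query: str) -> bool:
    return query.count("|") % 2 == 0

print(pipes_balanced("collection|test|:find[]"))  # True
print(pipes_balanced("collection|test:find[]"))   # False: closing pipe missing
```

A parser could run this check before reporting a generic syntax error and suggest "missing closing |" when the count is odd.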

    bug 
    opened by amaster507 1
  • Bug: Allow or Somehow Handle Recursion - Potential Data Loss

    Bug: Allow or Somehow Handle Recursion - Potential Data Loss

    What you did:

    collection|person|:insert[
      m{s|name|:s|Foo|},
      m{s|name|:s|Bar|}
    ]
    

    Let's say these were generated with the ids:

    Foo: f35dbd14-65e5-482f-a6e3-82c18b1aaaa7
    Bar: b70fd0a3-a780-4b0c-a095-c832b4261809

    I then connected them to each other so that they would be mutual friends:

    collection|person|:q[get[person|f35dbd14-65e5-482f-a6e3-82c18b1aaaa7|],update[set{value|friends|:v[person|b70fd0a3-a780-4b0c-a095-c832b4261809|]}]]
    collection|person|:q[get[person|b70fd0a3-a780-4b0c-a095-c832b4261809|],update[set{value|friends|:v[person|f35dbd14-65e5-482f-a6e3-82c18b1aaaa7|]}]]
    

    The updates were successfully applied, but then I tried to query them:

    collection|person|:get[
      person|f35dbd14-65e5-482f-a6e3-82c18b1aaaa7|,
      person|b70fd0a3-a780-4b0c-a095-c832b4261809|
    ]
    

    What Happened

    This created an error: Fetch recursion error

    [screenshot]

    If I didn't know what to update to remove the recursion, I would have no way to investigate it. I can no longer query these nodes, and I might as well delete them and start over.

    Ideas:

    I think loops should be allowed. Thinking about it more, the better way to handle them would be depth limiting. Loops will matter once you get into graphs (looking at your roadmap): you will want to be able to traverse a graph up and down.
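The depth-limiting idea can be sketched with a toy resolver that stops instead of erroring on cycles. Assumptions: link ids are plain keys in a dict, and hitting the depth limit returns the bare link id; this is not the AnnaDB engine:

```python
# Toy depth-limited link resolution: the two mutual friends form a cycle,
# but the fetch terminates by leaving the raw link id in place once the
# depth budget is spent. Illustrative sketch only.
store = {
    "foo": {"name": "Foo", "friends": ["bar"]},
    "bar": {"name": "Bar", "friends": ["foo"]},
}

def fetch(link_id, max_depth=2):
    if max_depth < 0:
        return link_id  # stop resolving: return the unresolved link
    obj = store[link_id]
    out = {}
    for key, value in obj.items():
        if isinstance(value, list):
            out[key] = [fetch(v, max_depth - 1) if v in store else v
                        for v in value]
        else:
            out[key] = value
    return out

doc = fetch("foo")
print(doc)  # nested friends resolved two levels deep, then raw ids
```

Instead of a hard "Fetch recursion error", the client would get usable data plus unresolved links it could fetch again if it wanted to go deeper.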

    bug 
    opened by amaster507 1
  • Feature Request: Node References and Nested Inserts

    Feature Request: Node References and Nested Inserts

    What you wanted to do

    Nested inserts of nodes: insert items into a collection and, in a nested manner, insert items into the same or different collections.

    What you did instead

    I could insert vectors and maps, but these vectors and maps would all be part of the same item in the parent collection and could not be queried from other collections.

    collection|person|:q[
      insert[
        person|foo|:m{
          s|name|:s|Foo|
        }
      ],
      insert[
        m{
          s|name|:s|Bar|,
          s|friends|:v[
            person|foo|
          ]
        }
      ]
    ]
    

    The workaround is to do back-to-back inserts into the collections and handle the linking in subsequent insert and update queries.

    Ideas on what this might look like

    What if inserts could be cascaded? Right now it seems nothing can come in the pipeline after an insert. But what if you could do something like this for starters:

    collection|person|:q[
      insert[
        person|foo|:m{
          s|name|:s|Foo|
        }
      ],
      insert[
        m{
          s|name|:s|Bar|,
          s|friends|:v[
            person|foo|
          ]
        }
      ]
    ]
    

    The person|foo| here is a temporary id that is not persisted; it is used for cross-reference within the same transaction. If you could do that, then you could probably also do something like:

    collection|person|:insert[
      m{
        s|name|:s|Bar|,
        s|friends|:v[
          person|:m{
            s|name|:s|Foo|
          }
        ]
      }
    ]
    

    And combine these two to be able to create inverse relationships (see issue __):

    collection|person|:insert[
      person|bar|:m{
        s|name|:s|Foo|,
        s|friends|:v[
          person|foo|:m{
            s|name|:s|Foo|,
            s|friends|:v[
              person|bar|
            ]
          }
        ]
      }
    ]
    

    The person|foo| in this query is not being referenced anywhere else so in theory, this could be person| and work the same way.
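The proposed temporary-id mechanism could resolve roughly like this toy sketch: assign real ids to all placeholders first, then rewrite every placeholder reference before persisting. This is an illustrative model of the request, not an existing AnnaDB feature:

```python
import uuid

# Toy resolution of temporary ids within a single transaction: placeholders
# such as "person|foo|" get real ids up front, then all references to them
# are rewritten. Because ids are assigned before rewriting, mutual (inverse)
# references like foo <-> bar also resolve. Illustrative sketch only.
def run_transaction(docs):
    # docs: {temp_id: document}; documents may reference temp ids as values
    real_ids = {temp: str(uuid.uuid4()) for temp in docs}

    def rewrite(value):
        if isinstance(value, str) and value in real_ids:
            return real_ids[value]
        if isinstance(value, dict):
            return {k: rewrite(v) for k, v in value.items()}
        if isinstance(value, list):
            return [rewrite(v) for v in value]
        return value

    return {real_ids[temp]: rewrite(doc) for temp, doc in docs.items()}

stored = run_transaction({
    "person|foo|": {"name": "Foo"},
    "person|bar|": {"name": "Bar", "friends": ["person|foo|"]},
})
print(stored)  # temp ids replaced by generated link ids throughout
```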

    This would also allow inserting into more than one collection at a time. This is where it might get complicated, not knowing how the insert resolvers work.

    collection|post|:insert[
      post|p1|:m{
        s|title|:s|Check Out AnnaDB!|,
        s|comments|:v[
          comment|c1|:m{
            s|text|:s|This is pretty awesome!|,
            s|onPost|:post|p1|,
            s|replies|:v[
              comment|:m{
                s|text|:s|For sure!|,
                s|replyTo|:comment|c1|
              }
            ]
          }
        ]
      }
    ]
    
    enhancement 
    opened by amaster507 1
Releases: 0.1.0
Owner: Roman Right (Python Developer, author of Beanie, a MongoDB ODM)