The Declarative Data Generator

Overview

Synth is a tool for generating realistic data using a declarative data model. Synth is database-agnostic and can scale to millions of rows of data.

Why Synth

Synth answers a simple question: there are so many ways to consume data, so why are there no frameworks for generating it?

Synth provides a robust, declarative framework for specifying constraint-based data generation, solving the following problems developers face regularly:

  1. You're building an app from scratch and have no way to populate your fresh schema with correct, realistic data.
  2. You're doing integration testing / QA against production data, but you know it's bad practice and you really shouldn't be doing it.
  3. You want to see how your system will scale if your database suddenly has 10x the data.

Synth solves exactly these problems with a flexible declarative data model which you can version control in git, peer review, and automate.

Key Features

The key features of Synth are:

  • Data as Code: Data generation is described using a declarative configuration language allowing you to specify your entire data model as code.

  • Import from Existing Sources: Synth can import data from existing sources and automatically create data models. Synth currently has Alpha support for Postgres, MySQL and MongoDB!

  • Data Inference: While ingesting data, Synth automatically works out the relations, distributions and types of the dataset.

  • Database Agnostic: Synth supports semi-structured data and is database-agnostic, playing nicely with both SQL and NoSQL databases.

  • Semantic Data Types: Synth uses the fake-rs crate to enable the generation of semantically rich data, with support for types like names, addresses, credit card numbers, etc.; see the sketch below.
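
For example, a field can be given a semantic generator via the faker construct. A minimal sketch reusing the schema shapes shown in the examples below (the generator names first_name and safe_email both appear on this page):

{
    "type": "string",
    "faker": {
        "generator": "first_name"
    }
}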

Status

  • Alpha: We are testing synth with a closed set of users
  • Public Alpha: Anyone can install synth. But go easy on us, there are a few kinks
  • Public Beta: Stable enough for most non-enterprise use-cases
  • Public: Production-ready

We are currently in Public Alpha. Watch "releases" of this repo to get notified of major updates.

Installation & Getting Started

On Linux and macOS you can get started with the one-liner:

$ curl -sSL https://getsynth.com/install | sh
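
Alternatively, if you have a Rust toolchain installed, installing from crates.io should also work (an assumption: the crate is published there under the name synth):

$ cargo install --locked synth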

For more installation options, check out the docs.

Examples

Building a data model from scratch

To start generating data without having a source to import from, you add Synth schema files to a namespace directory. Let's create one for our data model and call it my_app:

$ mkdir my_app

Next, let's create a users collection using Synth's configuration language and put it in my_app/users.json:

{
    "type": "array",
    "length": {
        "type": "number",
        "constant": 1
    },
    "content": {
        "type": "object",
        "id": {
            "type": "number",
            "id": {}
        },
        "email": {
            "type": "string",
            "faker": {
                "generator": "safe_email"
            }
        },
        "joined_on": {
            "type": "date_time",
            "format": "%Y-%m-%d",
            "subtype": "naive_date",
            "begin": "2010-01-01",
            "end": "2020-01-01"
        }
    }
}

Finally, generate data using the synth generate command:

$ synth generate my_app/ --size 2 | jq
{
  "users": [
    {
      "email": "[email protected]",
      "id": 1,
      "joined_on": "2014-12-14"
    },
    {
      "email": "[email protected]",
      "id": 2,
      "joined_on": "2013-04-06"
    }
  ]
}
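
Fields can be tuned further. As a sketch, here is a nullable numeric field using the optional, subtype and range constructs, all of which appear in the community schema quoted in the comments further down this page:

"age": {
    "optional": true,
    "type": "number",
    "subtype": "u64",
    "range": {
        "low": 18,
        "high": 90,
        "step": 1
    }
}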

Building a data model from an external database

If you have an existing database, Synth can automatically generate a data model by inspecting the database.

You can use the synth import command to automatically generate Synth schema files from your Postgres, MySQL or MongoDB database:

$ synth import tpch --from postgres://user:pass@localhost:5432/tpch
Building customer collection...
Building primary keys...
Building foreign keys...
Ingesting data for table customer...  10 rows done.
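
Before writing to a live database, you can sanity-check the imported model by generating to stdout first; a sketch combining flags used elsewhere on this page:

$ synth generate tpch --size 10 | jq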

Finally, generate data into another instance of Postgres:

$ synth generate tpch --to postgres://user:pass@localhost:5433/tpch

Why Rust

We decided to build Synth from the ground up in Rust. We love Rust, and given the scale of data we wanted synth to generate, it made sense as a first choice. The combination of memory safety, performance, expressiveness and a great community made it a no-brainer and we've never looked back!

Get in touch

If you would like to learn more, or you would like support for your use-case, feel free to open an issue on GitHub.

If your query is more sensitive, you can email [email protected] and we'll happily chat about your use case.

If you intend on using Synth, we would recommend joining our growing Discord community. We uphold the Synth Code of Conduct.

About Us

The Synth project is backed by OpenQuery. We are a Y Combinator-backed startup based in London, England. We are passionate about data privacy, developer productivity, and building great tools for software engineers.

Contributing

First of all, we sincerely appreciate all contributions to Synth, large or small, so thank you.

See the contributing section for details.

License

Synth is source-available and licensed under the Apache 2.0 License.

Contributors ✨

Thanks goes to these wonderful people (emoji key):


Christos Hadjiaslanis

📝 💼 💻 🖋 🎨 📖 🔍 🤔 🚇 🚧 📦 👀 🛡️ ⚠️ 📢

Nodar Daneliya

📝 💼 🖋 🎨 📖 🔍 🤔

llogiq

💼 💻 🖋 🤔 🚇 🚧 🧑‍🏫 👀 🛡️ ⚠️

Dmitri Shkurski

💻

Damien Broka

📝 💼 💻 🖋 🎨 📖 🔍 🤔 🚇 🚧 👀 ⚠️

fretz12

🤔 💻 📖 ⚠️

Tyler Bailey

💻 📖

Júnior Bassani

🐛 💻

Daniel Hofstetter

🐛 💻

This project follows the all-contributors specification. Contributions of any kind welcome!

Comments
  • UX: Make synth init obsolete


    Required Functionality: Currently, users must initialize a "workspace" before using synth to generate data. The rationale for this was that synth would be able to write files for both import and possibly generation (e.g. with #33), and having a directory for itself would reduce the risk of accidentally overwriting files. However, both requiring a call to "synth init" and the generated directories, even if empty, make it harder to set up tests and worsen the user experience.

    Proposed Solution: Recognizing that we threw the baby out with the bathwater, we should not require user interaction unless there's a problem. So at the very least we could remove the synth init command and the subsequent check for the .synth directory, and instead prompt the user before overwriting a file. Even better, we could record the file name and creation timestamp whenever synth creates a new file and not issue a prompt when a file on that list gets overwritten. We should however keep the list of generated files somewhat short; storing up to 1000 files should be enough for everybody (famous last words). When the list gets full, we could check for removed files, and otherwise remove the oldest entries.

    Use case: Improve user experience, simplify testing and reproducibility (because the tests no longer need to clean up .synth folders).

    enhancement core 
    opened by llogiq 9
  • Synth import extension for creating DB schema level collections and filter out non table objects


    Required Functionality

    Proposing to add a schema option to the synth import command to build a data model from a specific Postgres database schema. Currently it goes to the public schema, which works, but there needs to be a way to specify the schema. Also, when it finds non-table objects in the schema, it fails. I had to remove those objects temporarily for the import to work.

    $ synth import tpch --from postgres://user:pass@localhost:5432/tpch
    Building customer collection...
    Building primary keys...
    Building foreign keys...
    Ingesting data for table customer...  10 rows done.

    Proposed Solution: Provide a switch for selecting a schema from the existing database. Functionality to select individual tables would also be super. Besides, if it could automatically add dependencies to collections based on integrity constraints while selecting a schema, that would save a lot of development time.

    Use case: I have many schemas in an existing database and would like to select one. The import fails when it encounters non-table objects. Also, adding integrity constraints to each collection by hand is a very laborious task.

    enhancement help wanted 
    opened by hembhav 8
  • Custom panic handler


    Required Functionality: Whenever synth panics, the user should get the option to send an automated bug message, with another option to give their contact info so we can get back to them. This will help our users be more active when bugs occur and help us get more and better bug reports.

    Proposed Solution: A custom panic handler can gather some information (synth version, operating system, etc.) to format and send to an endpoint we need to set up.

    Use case: Improve our reaction to bugs and make synth better, faster.

    opened by llogiq 7
  • Feature: CSV import/export


    Required Functionality: Import and export of CSV files.

    Proposed Solution: The CSV format has some surprising complexity. Nonetheless, an import could at least use the headers (if any) of a CSV file and try to infer certain things about the values (e.g. are they all digits?). The export should have options to set the delimiter and perhaps quoting / escaping.

    Use case

    • Creating a schema (that can later be refined) from a set of CSV files
    • Generating a set of CSV files from a schema.
    help wanted integration 
    opened by llogiq 7
  • Telemetry Builds Failing


    Describe the bug: The telemetry builds fail because of a typo.

    To Reproduce

    1. Build with telemetry feature
    2. See error
    error[E0432]: unresolved import `crate::utils::META_OS`
      --> synth/src/cli/telemetry.rs:20:5
       |
    20 | use crate::utils::META_OS;
       |     ^^^^^^^^^^^^^^^^^^^^^ no `META_OS` in `utils`
    

    I will make a PR that fixes this. I will also have to make a new release.

    bug ci 
    opened by iamwacko 6
  • Utilize weights in OneOf


    1. Utilize the weights when choosing the next variant.
    2. Use swap_remove, which is O(1), since we don't care about the order.

    Small demo

    synth/examples/random_variants on hbina-ISSUE-149-utilize-weights-in-oneof on ☁️ (ap-southeast-1)
    ❯ cargo run --bin synth -- generate random --size 1 | jq | rg '1' | wc -l
    warning: /home/hbina/git/synth/synth/Cargo.toml: unused manifest key: target.i686-pc-windows-msvc.rustflags
    warning: /home/hbina/git/synth/synth/Cargo.toml: unused manifest key: target.x86_64-pc-windows-msvc.rustflags
        Finished dev [unoptimized + debuginfo] target(s) in 0.12s
         Running `/home/hbina/git/synth/target/debug/synth generate random --size 1`
    9930

    synth/examples/random_variants on hbina-ISSUE-149-utilize-weights-in-oneof on ☁️ (ap-southeast-1)
    ❯ cargo run --bin synth -- generate random --size 1 | jq | rg '2' | wc -l
    warning: /home/hbina/git/synth/synth/Cargo.toml: unused manifest key: target.i686-pc-windows-msvc.rustflags
    warning: /home/hbina/git/synth/synth/Cargo.toml: unused manifest key: target.x86_64-pc-windows-msvc.rustflags
        Finished dev [unoptimized + debuginfo] target(s) in 0.11s
         Running `/home/hbina/git/synth/target/debug/synth generate random --size 1`
    70

    Closes https://github.com/getsynth/synth/issues/149
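
    For context, a minimal sketch of a weighted one_of generator in Synth's schema language (an assumption on my part: the variants/weight field names; exact spelling may differ between versions):

    {
        "type": "one_of",
        "variants": [
            {
                "weight": 0.99,
                "type": "string",
                "pattern": "1"
            },
            {
                "weight": 0.01,
                "type": "string",
                "pattern": "2"
            }
        ]
    }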

    opened by hbina 6
  • Feature: Scheduler / Topological sorting namespaces


    Required Functionality: As of now, synth requires the namespaces to be in order when there are dependencies between them (for example, via foreign key relations in a database). A topological sort would solve this.

    Proposed Solution: The most used algorithm is Kahn's, which the toposort-scc crate already implements. An implementation could either (in accordance with the Apache-2.0 license) be derived from that or depend on the crate.

    Use case: With the topological sort and a function to know whether a namespace depends on another, we may even parallelize generation for multiple namespaces. This would also incidentally allow us to emit a good error message if the dependency graph between namespaces is cyclic.

    enhancement core 
    opened by llogiq 6
  • Schema Values List not same


    Hi, I created a schema with 11 fields of different types. The PG table consists of a serial field + 11 fields with the same names and types as the schema.

    Now when I try to generate into Postgres: synth generate --collection test_data --to postgres://admin:1122@localhost:5432/data ~/db_test/data/

    I get the error mentioned below:

    Caused by:
        0: db error: ERROR: VALUES lists must all be the same length
        1: ERROR: VALUES lists must all be the same length

    When I pipe the generated values to a normal file, I noticed that some rows have 10 values and some rows even have 9 values, so not all the generated rows have 11 values. Why does this happen, and how do I solve it?

    1. Schema (if applicable)
    	"type":"array",
    	"length":{
    		"type":"number",
    		"subtype":"u64",
    		"constant":10000
    	},
    	"content":{
    		"type":"object",
    		
    		"special_id":{
    			"optional":false,
    			"type":"number",
    			"subtype":"u64",
    			"range":{
    				"low":1,
    				"high":20000,
    				"step":2
    			}
    		},
    		"first_name":{"optional":false,"type":"string","faker":{"generator":"first_name"}},
    		"surname":{"optional":false,"type":"string","faker":{"generator":"last_name"}},
    		"nickname":{"optional":false,"type":"string","faker":{"generator":"sentence"}},
    		"ages":{
    			"optional":false,
    			"type":"number",
    			"subtype":"u64",
    			"range":{
    				"low":18,
    				"high":45,
    				"step":1
    			}			
    		},
    		"department":{
    			"optional":false,
    			"type":"string",
    			"pattern": "(Accounting|Managment|IT||Sales)"
    		},
    		"job":{
    			"optional":false,
    			"type":"string",
    			"faker":{
    				"generator":"job"
    			}
    		},
    		"ident":{
    			"optional": true,
    			"type":"string",
    			"uuid":{}
    		},
    		"email_address":{
    			"optional":false,
    			"type":"string",
    			"faker":{
    				"generator":"company_email"
    			}
    		},
    		"company_web":{
    			"optional":true,
    			"type":"string",
    			"faker":{
    				"generator":"safe_domain_name"
    			}
    		},
    		"active":{
    			"type":"bool",
    			"frequency":0.5
    		}
    	}	
    }
    
    
    bug 
    opened by beautybird 6
  • Tests failing


    Describe the bug: One of the tests on a .json file from a tutorial is failing.

    To Reproduce: Have the cachix GitHub action run.

    See error

    ---- docs_blog_2021_08_31_seeding_databases_tutorial_dot_md stdout ----
    /build/source/synth/tmp/docs/blog/2021-08-31-seeding-databases-tutorial:224 has a JSON only code block that will be skipped
    thread 'docs_blog_2021_08_31_seeding_databases_tutorial_dot_md' panicked at 'Unable to open the namespace "docs/blog/2021-08-31-seeding-databases-tutorial"
    
    Caused by:
        0: at file docs/blog/2021-08-31-seeding-databases-tutorial/735.json
        1: Failed to parse collection
        2: expected `,` or `}` at line 4 column 38
    should contain one of the following errors: {
        "unknown variant `date_time`",
    }', synth/tests/docs.rs:75:9
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    
    
    failures:
        docs_blog_2021_08_31_seeding_databases_tutorial_dot_md
    

    Expected behavior: No error was expected.

    Environment: GitHub Actions

    Additional context: The only change between failing and passing was a minor change to documentation. I don't know why it fails now.

    bug 
    opened by iamwacko 5
  • Failed Compilation


    Describe the bug: Nightly changed its API so that .chain() is now .sources(). This broke compilation.

    To Reproduce: Steps to reproduce the behavior:

    1. cargo +nightly build --bin synth
    2. See error
    |
    44 |         let mut chain = original.chain().collect::<Vec<_>>();
       |                                  ^^^^^ `&(dyn StdError + 'static)` is not an iterator
       |
      ::: /home/runner/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/error.rs:31:1
       |
    31 | pub trait Error: Debug + Display {
       | -------------------------------- doesn't satisfy `(dyn StdError + 'static): Iterator`
       |
       = note: the following trait bounds were not satisfied:
               `&(dyn StdError + 'static): Iterator`
               which is required by `&mut &(dyn StdError + 'static): Iterator`
               `(dyn StdError + 'static): Iterator`
               which is required by `&mut (dyn StdError + 'static): Iterator`
    

    Expected behavior: Successful compilation.

    Additional context: This can be fixed easily by changing .chain() to .sources(). I will make a PR fixing this soon.

    bug ci 
    opened by iamwacko 5
  • Support JSON Lines format (.jsonl)


    Required Functionality: Currently JSON is supported and CSV is going to be supported (https://github.com/getsynth/synth/issues/33). JSON Lines is like CSV but with the power of JSON: it's widely used to export big chunks of data with columns holding complex values not supported by CSV, where plain JSON is inefficient. Because one line is one record, like CSV, it's much more efficient than JSON to import, and it's used, among other things, because it's really easy to build JSON Lines from a JSON file that is filtered / post-processed with tools like jq.

    Proposed Solution: Support it, and because, as with CSV, only one type of data is supported, either assume that the file only has one type of data, or allow setting as an argument the name of the column that determines the type of each record (this also applies to CSV). E.g. in CouchDB there are no tables or collections; everything is stored in the same database, but by convention the field "type" is used to discriminate the type of the record: users, transactions, claims...

    Use case: As mentioned above, it's easy to generate and work with JSON Lines. E.g. here is a transformation example made with jq:

    $ echo '{ "transactions": [ {"tx":123, "amount": 100, "cust_id": 444}, {"tx":123, "amount": 100, "cust_id": 444} ]  }' \
        | jq '.transactions[]|{tx:.tx,val:.amount}' -c
    {"tx":123,"val":100}
    {"tx":123,"val":100}
    

    Another example: although the documentation of mongoexport says it exports to JSON, in reality the output format is JSON Lines.

    enhancement integration 
    opened by mrsarm 5
  • Synth TS Integration


    Required Functionality: Make some of Synth usable from TypeScript.

    Proposed Solution: Synth as a native Deno module, probably with the ability to call a Synth server so it can work in the browser.

    Use case: Graphgen has expressed interest in using some of our internals.

    enhancement integration 
    opened by iamwacko 0
  • Support bytea column


    Required Functionality: I want to use synth on a PostgreSQL DB; however, it does not feature a converter for bytea.

    Proposed Solution: Implement a converter for bytea.

    Use case: I want to import my database into synth, but I'm currently blocked. I'm not interested in representative generated values for these columns; a default value would suffice.

    opened by jeenkhoorn 0
  • Swagger integration


    Required Functionality: I would appreciate a new kind of import: importing from a Swagger spec (and Async / CloudEvents if possible). That would help a lot with data generation for mocks/stubs and performance testing.

    Proposed Solution

    Use case: Generating mocks/stubs for performance testing of APIs and fast data for POC validations.

    integration 
    opened by gron1987 1
  • partial import support


    Required Functionality: Importing a DB schema is a complex task with many possible points of failure. I always believe that something is better than nothing: allowing errors to be ignored and part of the schema to be imported could be very handy. In some cases it would even be worthwhile to fill the gaps manually until the DB has better support.

    Proposed Solution: There are two main options as I see it; Synth may offer both:

    • skip-failed-tables - build a partial schema, skipping the problematic tables and any other tables dependent on them.
    • ignore-errors - add the failed tables with missing data so the user can fill in the gaps. generate should validate that the user fixed the problems and exit accordingly.

    Both options should be suggested to the user upon import failure, along with a detailed report containing the problematic table/column names and causes; if a table was skipped because it depends on a problematic table, that should be specified in the cause.

    Use case

    • The scan fails; the user sees that the failure is related to two small tables and chooses skip-failed-tables, as suggested in the error.
    • The scan fails; the user sees that many tables were skipped because they depend on one problematic table. The user chooses ignore-errors and fixes the problematic table.
    opened by zvif-orca 0
  • collect all import errors before exiting


    Required Functionality: When attempting to import a database, there could potentially be more than a single issue. Exiting after the first import issue can be frustrating, as a single issue may hide many others. The import should keep running, keep collecting errors, and finally exit with a list of problems.

    Proposed Solution

    • Collect errors into an array during the import
    • Before dumping the schema, check whether the array is non-empty
    • Finally, exit listing all the collected import errors

    Use case: I have a few Postgres DBs, and I got unimplemented-converter errors, one on each DB. I had to write a small script that collects all column types from all the DBs and look at the Synth source to understand which types are not yet supported, since the first error masks the others. Potentially, there could be errors other than unsupported columns, so adapting Synth to my DBs could take a long time.

    enhancement 
    opened by zvif-orca 0