Reshape


Reshape is an easy-to-use, zero-downtime schema migration tool for Postgres. It automatically handles complex migrations that would normally require downtime or manual multi-step changes. During a migration, Reshape ensures both the old and new schema are available at the same time, allowing you to gradually roll out your application. It will also perform all changes without excessive locking, avoiding downtime caused by blocking other queries. For a more thorough introduction to Reshape, check out the introductory blog post.

Designed for Postgres 12 and later.

Note: Reshape is experimental and should not be used in production. It can (and probably will) break your application.

How it works

Reshape works by creating views that encapsulate the underlying tables; your application interacts with these views rather than with the tables directly. During a migration, Reshape will automatically create a new set of views and set up triggers to translate inserts and updates between the old and new schema. This means that every deployment is a three-phase process:

  1. Start migration (reshape migrate): Sets up views and triggers to ensure both the new and old schema are usable at the same time.
  2. Roll out application: Your application can be gradually rolled out without downtime. The existing deployment will continue using the old schema whilst the new deployment uses the new schema.
  3. Complete migration (reshape complete): Removes the old schema and any intermediate data and triggers.

If the application deployment fails, you should run reshape abort which will roll back any changes made by reshape migrate without losing data.
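
Putting these together, a deployment script might run something like the following (a sketch, assuming the default migrations/ directory and connection settings):

# 1. Make both the old and the new schema available
reshape migrate

# 2. Roll out the new application version using your usual deployment process

# 3. Once the rollout has finished, remove the old schema
reshape complete

# If the rollout fails instead, roll back the in-progress migration:
# reshape abort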

Getting started

Installation

Binaries

Binaries are available for macOS and Linux under Releases.

Cargo

Reshape can be installed using Cargo (requires Rust 1.58 or later):

cargo install reshape

Docker

Reshape is available as a Docker image on Docker Hub.

docker run -v $(pwd):/usr/share/app fabianlindfors/reshape reshape migrate

Creating your first migration

Each migration should be stored as a separate file in a migrations/ directory. The files can be in either JSON or TOML format and the name of the file will become the name of your migration. We recommend prefixing every migration with an incrementing number as migrations are sorted by file name.
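
A migrations directory might therefore look something like this (file names are purely illustrative):

migrations/
    1_create_users_table.toml
    2_add_last_name_column.toml
    3_add_index_on_last_name.toml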

Let's create a simple migration to set up a new table users with two fields, id and name. We'll create a file called migrations/1_create_users_table.toml:

[[actions]]
type = "create_table"
name = "users"
primary_key = ["id"]

	[[actions.columns]]
	name = "id"
	type = "INTEGER"
	generated = "ALWAYS AS IDENTITY"

	[[actions.columns]]
	name = "name"
	type = "TEXT"

This is the equivalent of running CREATE TABLE users (id INTEGER GENERATED ALWAYS AS IDENTITY, name TEXT).

Preparing your application

Reshape relies on your application using a specific schema. When establishing the connection to Postgres in your application, you need to run a query to select the most recent schema. This query can be generated using: reshape generate-schema-query.

To pass it along to your application, you can for example use an environment variable in your run script: RESHAPE_SCHEMA_QUERY=$(reshape generate-schema-query). Then in your application:

# Example for Python
import os

reshape_schema_query = os.getenv("RESHAPE_SCHEMA_QUERY")
db.execute(reshape_schema_query)  # run once on each new database connection

Running your migration

To create your new users table, run:

reshape migrate --complete

We use the --complete flag to automatically complete the migration. During a production deployment, you should first run reshape migrate followed by reshape complete once your application has been fully rolled out.

If nothing else is specified, Reshape will try to connect to a Postgres database running on localhost using postgres as both username and password. See Connection options for details on how to change the connection settings.

Using during development

When adding new migrations during development, we recommend running reshape migrate but skipping reshape complete. This way, the new migrations can be iterated on by updating the migration file and running reshape abort followed by reshape migrate.
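
As a sketch, that development loop looks like this:

# Apply the work-in-progress migration
reshape migrate

# ...edit the migration file...

# Roll back the in-progress migration and apply the updated version
reshape abort
reshape migrate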

Writing migrations

Basics

Every migration consists of one or more actions. The actions will be run sequentially. Here's an example of a migration with two actions to create two tables, customers and products:

[[actions]]
type = "create_table"
name = "customers"
primary_key = ["id"]

	[[actions.columns]]
	name = "id"
	type = "INTEGER"
	generated = "ALWAYS AS IDENTITY"

[[actions]]
type = "create_table"
name = "products"
primary_key = ["sku"]

	[[actions.columns]]
	name = "sku"
	type = "TEXT"

Every action has a type. The supported types are detailed below.

Tables

Create table

The create_table action will create a new table with the specified columns, indices and constraints.

Example: create a customers table with a few columns and a primary key

[[actions]]
type = "create_table"
name = "customers"
primary_key = ["id"]

	[[actions.columns]]
	name = "id"
	type = "INTEGER"
	generated = "ALWAYS AS IDENTITY"

	[[actions.columns]]
	name = "name"
	type = "TEXT"

	# Columns default to nullable
	nullable = false

	# default can be any valid SQL value, in this case a string literal
	default = "'PLACEHOLDER'"

Example: create users and items tables with a foreign key between them

[[actions]]
type = "create_table"
name = "users"
primary_key = ["id"]

	[[actions.columns]]
	name = "id"
	type = "INTEGER"
	generated = "ALWAYS AS IDENTITY"

[[actions]]
type = "create_table"
name = "items"
primary_key = ["id"]

	[[actions.columns]]
	name = "id"
	type = "INTEGER"
	generated = "ALWAYS AS IDENTITY"

	[[actions.columns]]
	name = "user_id"
	type = "INTEGER"

	[[actions.foreign_keys]]
	columns = ["user_id"]
	referenced_table = "users"
	referenced_columns = ["id"]

Rename table

The rename_table action will change the name of an existing table.

Example: change name of users table to customers

[[actions]]
type = "rename_table"
table = "users"
new_name = "customers"

Remove table

The remove_table action will remove an existing table.

Example: remove users table

[[actions]]
type = "remove_table"
table = "users"

Add foreign key

The add_foreign_key action will add a foreign key between two existing tables. The migration will fail if the existing column values aren't valid references.

Example: create foreign key from items to users table

[[actions]]
type = "add_foreign_key"
table = "items"

	[actions.foreign_key]
	columns = ["user_id"]
	referenced_table = "users"
	referenced_columns = ["id"]

Remove foreign key

The remove_foreign_key action will remove an existing foreign key. The foreign key will only be removed once the migration is completed, which means that your new application must continue to adhere to the foreign key constraint.

Example: remove foreign key items_user_id_fkey from users table

[[actions]]
type = "remove_foreign_key"
table = "items"
foreign_key = "items_user_id_fkey"

Columns

Add column

The add_column action will add a new column to an existing table. You can optionally provide an up setting. This should be an SQL expression which will be run for all existing rows to backfill the new column.

Example: add a new column reference to table products

[[actions]]
type = "add_column"
table = "products"

	[actions.column]
	name = "reference"
	type = "INTEGER"
	nullable = false
	default = "10"

Example: replace an existing name column with two new columns, first_name and last_name

[[actions]]
type = "add_column"
table = "users"

# Extract the first name from the existing name column
up = "(STRING_TO_ARRAY(name, ' '))[1]"

	[actions.column]
	name = "first_name"
	type = "TEXT"


[[actions]]
type = "add_column"
table = "users"

# Extract the last name from the existing name column
up = "(STRING_TO_ARRAY(name, ' '))[2]"

	[actions.column]
	name = "last_name"
	type = "TEXT"


[[actions]]
type = "remove_column"
table = "users"
column = "name"

# Reconstruct name column by concatenating first and last name
down = "first_name || ' ' || last_name"

Example: extract nested value from unstructured JSON data column to new name column

[[actions]]
type = "add_column"
table = "users"

# #>> '{}' converts the JSON string value to TEXT
up = "data['path']['to']['value'] #>> '{}'"

	[actions.column]
	name = "name"
	type = "TEXT"

Alter column

The alter_column action enables many different changes to an existing column, for example renaming, changing type and changing existing values.

When performing more complex changes than a rename, up and down should be provided. These should be SQL expressions which determine how to transform between the new and old version of the column. Inside those expressions, you can reference the current column value by the column name.

Example: rename last_name column on users table to family_name

[[actions]]
type = "alter_column"
table = "users"
column = "last_name"

	[actions.changes]
	name = "family_name"

Example: change the type of reference column from INTEGER to TEXT

[[actions]]
type = "alter_column"
table = "users"
column = "reference"

up = "CAST(reference AS TEXT)" # Converts from integer value to text
down = "CAST(reference AS INTEGER)" # Converts from text value to integer

	[actions.changes]
	type = "TEXT" # Previous type was 'INTEGER'

Example: increment all values of an index column by one

[[actions]]
type = "alter_column"
table = "users"
column = "index"

up = "index + 1" # Increment for new schema
down = "index - 1" # Decrement to revert for old schema

	[actions.changes]
	name = "index"

Example: make name column not nullable

[[actions]]
type = "alter_column"
table = "users"
column = "name"

# Use "N/A" for any rows that currently have a NULL name
up = "COALESCE(name, 'N/A')"

	[actions.changes]
	nullable = false

Example: change default value of created_at column to current time

[[actions]]
type = "alter_column"
table = "users"
column = "created_at"

	[actions.changes]
	default = "NOW()"

Remove column

The remove_column action will remove an existing column from a table. You can optionally provide a down setting. This should be an SQL expression which will be used to determine values for the old schema when inserting or updating rows using the new schema. The down setting must be provided when the removed column is NOT NULL or doesn't have a default value.

Any indices that cover the column will be removed.

Example: remove column name from table users

[[actions]]
type = "remove_column"
table = "users"
column = "name"

# Use a default value of "N/A" for the old schema when inserting/updating rows
down = "'N/A'"

Indices

Add index

The add_index action will add a new index to an existing table.

Example: create a users table with a unique index on the name column

[[actions]]
type = "create_table"
name = "users"
primary_key = ["id"]

	[[actions.columns]]
	name = "id"
	type = "INTEGER"
	generated = "ALWAYS AS IDENTITY"

	[[actions.columns]]
	name = "name"
	type = "TEXT"

[[actions]]
type = "add_index"
table = "users"

	[actions.index]
	name = "name_idx"
	columns = ["name"]

	# Defaults to false
	unique = true

Example: add GIN index to data column on products table

[[actions]]
type = "add_index"
table = "products"

	[actions.index]
	name = "data_idx"
	columns = ["data"]

	# One of: btree (default), hash, gist, spgist, gin, brin
	type = "gin"

Remove index

The remove_index action will remove an existing index. The index won't actually be removed until the migration is completed.

Example: remove the name_idx index

[[actions]]
type = "remove_index"
index = "name_idx"

Enums

Create enum

The create_enum action will create a new enum type with the specified values.

Example: add a new mood enum type with three possible values

[[actions]]
type = "create_enum"
name = "mood"
values = ["happy", "ok", "sad"]

Remove enum

The remove_enum action will remove an existing enum type. Make sure all usages of the enum have been removed before running the migration. The enum will only be removed once the migration is completed.

Example: remove the mood enum type

[[actions]]
type = "remove_enum"
enum = "mood"

Commands and options

reshape migrate

Starts a new migration, applying all migrations under migrations/ that haven't yet been applied. After the command has completed, both the old and new schema will be usable at the same time. When you have rolled out the new version of your application which uses the new schema, you should run reshape complete.

Options

See also Connection options

Option         | Default     | Description
--complete, -c | false       | Automatically complete migration after applying it.
--dirs         | migrations/ | Directories to search for migration files. Multiple directories can be specified using --dirs dir1 dir2 dir3.

reshape complete

Completes migrations previously started with reshape migrate.

Options

See Connection options

reshape abort

Aborts any migrations which haven't yet been completed.

Options

See Connection options

reshape generate-schema-query

Generates the SQL query you need to run in your application before using the database. This command does not require a database connection. Instead it will generate the query based on the latest migration in the migrations/ directory (or the directories specified by --dirs).

The query should look something like SET search_path TO migration_1_initial_migration.
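
For ad-hoc inspection you can run the generated query in the same session as your own statements, for example with psql (a sketch, reusing the users table from the earlier examples):

psql -c "$(reshape generate-schema-query); SELECT * FROM users;"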

Options

Option | Default     | Description
--dirs | migrations/ | Directories to search for migration files. Multiple directories can be specified using --dirs dir1 dir2 dir3.

Connection options

The options below can be used with all commands that communicate with Postgres. Use either a connection URL or specify each connection option individually.

All options can also be set using environment variables instead of flags. If a .env file exists, then variables will be automatically loaded from there.

Option     | Default   | Environment variable | Description
--url      |           | DB_URL               | URL to your Postgres database
--host     | localhost | DB_HOST              | Hostname to use when connecting to Postgres
--port     | 5432      | DB_PORT              | Port which Postgres is listening on
--database | postgres  | DB_NAME              | Database name
--username | postgres  | DB_USERNAME          | Postgres username
--password | postgres  | DB_PASSWORD          | Postgres password
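
For example, to run a migration against a specific database using a connection URL (the URL itself is illustrative):

reshape migrate --url "postgres://myuser:mypassword@db.example.com:5432/mydb"

# or, equivalently, using an environment variable
DB_URL="postgres://myuser:mypassword@db.example.com:5432/mydb" reshape migrate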

License

Reshape is released under the MIT license.

Comments
  • Inserting data?


I'll be honest: I am still very confused about how to use this tool after making the migration. I am trying to remove the not-null constraint I added in the previous example,

    reshape_example> SET search_path TO migration_6_make_name_null_again;
    SET
    Time: 0.001s
    reshape_example> insert into users (name) values (null);
    null value in column "name" of relation "users" violates not-null constraint
    DETAIL:  Failing row contains (10, null, null).
    
    Time: 0.005s
    

    And the migration,

    [[actions]]
    type = "alter_column"
    table = "users"
    column = "name"
    
      [actions.changes]
      nullable = true
    

    Is this expected to work?

    opened by chiroptical 14
  • Could this be implemented in PLpgSQL instead of Rust?


    I think there are two barriers to adoption in the way this is written:

    • it has its own migration format, which doesn't seem like it can be seamlessly integrated with existing migrations that run arbitrary DDL queries, like modifying constraints and triggers
    • being Rust, we need separate build/deployment code to deploy it alongside our app

    I wonder if this is necessary? If there were PLpgSQL functions we could call to perform the various actions, then we could call those functions from our existing migration tools. The actions could be passed to PLpgSQL functions in jsonb format.

    opened by jedwards1211 7
  • Allow creating tables without primary keys


    Thank you for this wonderful program! Looking forward to the day I trust using it at work :)

    I am not sure if requiring primary keys is intentional, but relaxing this requirement seems to work.

    opened by aschleck 4
  • Use nullable field from changes


    This text is broken up into two pieces. The first is the context for the fix (which I think is correct) and the second is a question because I am misunderstanding something. I also want to try to add some tests for this.

    Fix

    I wanted a migration that adds a NOT NULL constraint to an existing field. You can see the example file in the question section below. This didn't work. After a little print debugging I realized that column.nullable referred to the existing column, which would always retain its nullability; you could never add or remove the constraint through the TOML. I am pretty sure this works for adding or removing a nullability constraint, but I want to write some tests, which I'll do soon!

    Question

    The next issue I faced is given the following migration,

    [[actions]]
    type = "alter_column"
    table = "users"
    column = "name"
    
      [actions.changes]
      nullable = false
    

    After reshape migrate, I am unable to add new rows to the schema. I am a bit out of my element here and still learning, but I tried the following,

    insert into migration_5_make_name_not_null.users (name) values ('chiroptical');
    

    which yields,

    null value in column "name" of relation "users" violates not-null constraint
    DETAIL:  Failing row contains (8, null, null).
    

    I am a bit confused here. Am I supposed to only insert data into the public schema and let the triggers handle everything else? How does this work with SET search_path TO migration_5_make_name_not_null? Maybe I need to read more postgresql documentation here and I shouldn't be asking you?

    opened by chiroptical 4
  • Use inline format parameters


    I am trying out your tool and compiled on my machine (Manjaro 21.2.1, Rustc 1.56.1), but I got the following errors

    error: there is no argument named `unique_def`
       --> src/migrations/alter_column.rs:176:24
        |
    176 |                 CREATE {unique_def} INDEX CONCURRENTLY IF NOT EXISTS "{new_index_name}" ON "{table}" USING {index_type} ({columns})
    

    This happened with both cargo install reshape (outside the git repository) and cargo build inside the cloned repository. I have done the world's tiniest amount of rust so this could be a lack of knowledge on my part. In this PR, I just inline the parameters that were being complained about. Please let me know if this is useful or I am just doing something wrong. I really appreciate the help. Thanks!

    opened by chiroptical 4
  • Comment regarding multi-tenancy


    I stumbled over your blog https://fabianlindfors.se/blog/schema-migrations-in-postgres-using-reshape/ and unfortunately, there is no comment function (like Disqus), so I thought to comment here.

    First of all, big appreciation! Great stuff you made ;)

    But I wanted to point your attention to the fact that in real-life systems the schema might already be used for schema-based tenant separation, and a connection pooler (HikariCP, PgBouncer, PgPool-II, Heimdall) might also interfere.

    • https://blog.arkency.com/what-surprised-us-in-postgres-schema-multitenancy/
    • https://blog.arkency.com/comparison-of-approaches-to-multitenancy-in-rails-apps/

    So in order to address this, reshape would need to be aware of the tenant management, and the application (more precisely the connection pooler) would need to be aware of the reshape-managed schemas.

    opened by MahatmaFatalError 4
  • Quote identifier names


    Identifier names may contain characters that require them to be double-quoted. From looking at the code, it appears that they are currently not quoted. Apologies if I'm wrong about this as I'm not very familiar with Rust.

    opened by dmjef 4
  • Fix incorrect example in the README


    Unrelated to this PR: I'm not sure if you're looking for some general feedback. Here are three things I've noticed in my not-very-serious usage of this.

    1. As far as I can tell the filenames are lexically sorted. So if you have 10 migrations (1_blah, 2_blah, ..., 10_blah) then the order Reshape will process them in is 10_blah, 1_blah, 2_blah.... I found this quite unexpected and undesirable. Not sure if this is a documentation bug (the README suggests this format) or a bug bug. Personally, I think it's worth enforcing the filename format so you can sort numerically on the number.
    2. During development I've been preferring to blow away my entire database and update migration 1_ rather than creating tons of additional migrations. I think the reason I am doing this is because I was originally creating development migrations in the same style as I make Git commits: frequently and without thought. But whereas I can rewrite history and squash everything with Git when I want to PR, I have no automatic ability to squash my development migrations into one final migration when I am ready to PR my schema. And maybe it wouldn't be so bad if it were like Git commits where few people ever look at the history, but I look at these migration files to understand the current schema so I'm incentivized to keep the files tidy.
    3. Blowing away my database has generally been fine, but I now find myself in the position of (even during development) needing tables within the same database that I keep around for a long time (these tables store elevation data and are expensive to compute) along with tables that (due to previous problem) I prefer to blow away. I am not sure if repairing a database with missing tables is even a usecase you're interested in, but if I can't doctor/repair the database back to what Reshape wants it to be this may ultimately force me off Reshape.
    opened by aschleck 3
  • After abort, generate-schema-query returns the aborted search_path


    $ pgcli db
    > \dn
    +--------------------------------+-------------+
    | Name                           | Owner       |
    |--------------------------------+-------------|
    | migration_5_make_name_not_null | chiroptical |
    | public                         | postgres    |
    | reshape                        | chiroptical |
    +--------------------------------+-------------+
    $ reshape migrate
    Applying 1 migrations
    
    Migrating '6_make_name_null_again':
      + Altering column "name" on "users" done
    
    ...
    $ reshape abort
    Aborting '6_make_name_null_again' done
    $ reshape generate-schema-query
    SET search_path TO migration_6_make_name_null_again
    

    However, if you look at \dn in pgcli the schema doesn't exist.

    opened by chiroptical 3
  • Fix syntax error in README


    Telling Postgres to only drop an extension if it isn't installed made it sad

    (PS I only noticed this because I am trying to use your suggested migration start/abort development workflow. I think you're correct but I don't have enough practice to have feedback. Sorry for going silent on the other bug!)

    opened by aschleck 1
  • Output the failing filename when showing a parse error


    I'll admit I don't know Rust, hope this is a reasonable way to do this.

    Before:

    Error: missing field `name` for key `actions` at line 6 column 3
    

    After:

    Error: failed to parse migration file migrations/10_a.toml
    
    Caused by:
        missing field `name` for key `actions` at line 6 column 3
    
    opened by aschleck 1
  • Auto-generating migrations from models/structs


    Many ORMs provide migration management and include the ability to generate a migration from model changes in the source code.

    e.g. Django models https://docs.djangoproject.com/en/4.1/ref/django-admin/#django-admin-makemigrations and add-ons like https://pypi.org/project/django-pg-zero-downtime-migrations/ or https://github.com/yandex/zero-downtime-migrations (and there are others) for zero-downtime.

    SQLAlchemy + Alembic also provides this. https://alembic.sqlalchemy.org/en/latest/autogenerate.html

    I can't find any Rust tool which provides this so far. All of the Rust ORMs which provide migration management either use SQL or a special DDL-only, database-agnostic interface, like https://www.sea-ql.org/sea-orm-tutorial/ch01-02-migration-cli.html or https://git.irde.st/spacekookie/barrel. SeaORM can create DDL from the models, but it can't alter them if the model changes.

    To achieve this in reshape would require some way for other tooling to provide reshape with the models in a language-neutral manner; reshape would then diff the current state as of the latest migration against the incoming models. The incoming language-neutral models could be either a custom DSL for the database schema (TOML, maybe similar to the reshape migrations DSL?) or just real Postgres DDL which reshape parses (maybe there is a good library for this? I found two at https://github.com/search?q=parse+DDL+language%3Arust&type=repositories but haven't dug into it).

    opened by jayvdb 1
  • [FR] Do you have any thoughts about indices with GiST operators?


    As far as I can tell reshape can generate

    CREATE INDEX exciting_index ON boundaries USING gist (epoch, name)
    

    but I am trying to build an index like the following:

    CREATE INDEX exciting_index ON boundaries USING gist (epoch, name gist_trgm_ops(siglen=256))
    

    Is this something you would be open to supporting? Or does reshape already support this? I definitely understand if it's too niche. And syntactically I am not sure how it would fit without being gross, this idea makes me feel bad:

    [[actions]]
    type = "add_index"
    table = "boundaries"
    
    	[actions.index]
    	name = "exciting_index"
    	columns = ["epoch", "name"]
    	type = "gist"
            operators = {
                "name" = "gist_trgm_ops(siglen=256)"
            }
    
    opened by aschleck 2
  • When are you planning your first production release?


    Great work

    I really like the approach you have taken and am very interested in how things progress. Would love to talk about this in more detail.

    Alttaf

    opened by Alttaf 3