PgCat
Meow. PgBouncer rewritten in Rust, with sharding, load balancing and failover support.

Alpha: don't use in production just yet.

Local development

  1. Install Rust (latest stable will work great).
  2. cargo run --release (to get better benchmarks).
  3. Change the config in pgcat.toml to fit your setup (optional given next step).
  4. Install Postgres and run psql -f tests/sharding/query_routing_setup.sql

Tests

You can just use pgbench to test your changes:

pgbench -i -h 127.0.0.1 -p 6432 && \
pgbench -t 1000 -p 6432 -h 127.0.0.1 --protocol simple && \
pgbench -t 1000 -p 6432 -h 127.0.0.1 --protocol extended

See sharding README for sharding logic testing.

Features

  1. Session mode.
  2. Transaction mode.
  3. COPY protocol support.
  4. Query cancellation.
  5. Round-robin load balancing of replicas.
  6. Banlist & failover.
  7. Sharding!

Session mode

Each client owns its own server for the duration of the session. Commands like SET are allowed. This is identical to PgBouncer session mode.

Transaction mode

The connection is attached to the server for the duration of the transaction. SET will pollute the connection, but SET LOCAL works great. Identical to PgBouncer transaction mode.

COPY protocol

That one isn't particularly special, but it's good to mention that you can COPY data in and out of the server using this pooler.

Query cancellation

Okay, this is just basic stuff, but we support cancelling queries. If you know the Postgres protocol, this might be relevant given that this is a transaction-mode pooler; if you're new to Pg, don't worry about it, it works.
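The subtlety here: the cancel key a client gets at startup belongs to the pooler, not to any one server, so the pooler must map it to whichever server connection the client is using at that moment. A rough sketch of such a mapping (all names here are illustrative, not PgCat's actual internals):

```rust
use std::collections::HashMap;

/// The (process_id, secret_key) pair a server connection hands out
/// in its BackendKeyData message at startup.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct BackendKey {
    pub process_id: i32,
    pub secret_key: i32,
}

/// Maps a client's cancel key to the server it is currently using,
/// so a CancelRequest can be forwarded to the right backend.
pub struct CancelMap {
    map: HashMap<i32, BackendKey>,
}

impl CancelMap {
    pub fn new() -> Self {
        Self { map: HashMap::new() }
    }

    /// Record which server is serving this client right now
    /// (in transaction mode this changes as the client hops between servers).
    pub fn assign(&mut self, client_id: i32, server: BackendKey) {
        self.map.insert(client_id, server);
    }

    /// On a CancelRequest from this client, find the backend to cancel.
    pub fn lookup(&self, client_id: i32) -> Option<BackendKey> {
        self.map.get(&client_id).copied()
    }
}

fn main() {
    let mut cancels = CancelMap::new();
    cancels.assign(7, BackendKey { process_id: 42, secret_key: 1234 });
    println!("{:?}", cancels.lookup(7));
}
```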

Round-robin load balancing

This is the novel part. PgBouncer doesn't support it and suggests we use DNS or a TCP proxy instead. We prefer to have everything as part of one package; arguably, it's easier to understand and optimize. This pooler will round-robin between multiple replicas keeping load reasonably even.
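A minimal sketch of what round-robin selection can look like (illustrative only; PgCat's actual implementation lives in the source):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Round-robin selector over a fixed list of replica addresses.
pub struct RoundRobin {
    replicas: Vec<String>,
    next: AtomicUsize,
}

impl RoundRobin {
    pub fn new(replicas: Vec<String>) -> Self {
        Self { replicas, next: AtomicUsize::new(0) }
    }

    /// Pick the next replica; wraps around at the end of the list.
    pub fn pick(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.replicas.len();
        &self.replicas[i]
    }
}

fn main() {
    let lb = RoundRobin::new(vec!["replica1".into(), "replica2".into()]);
    // Transactions alternate between the two replicas.
    for _ in 0..4 {
        println!("{}", lb.pick());
    }
}
```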

Banlist & failover

This is where it gets even more interesting. If we fail to connect to one of the replicas or it fails a health check, we add it to a ban list. No more new transactions will be served by that replica for, in our case, 60 seconds. This gives it the opportunity to recover while clients are happily served by the remaining replicas.

This decreases error rates substantially! Worth noting here that on busy systems, if the replicas are running too hot, failing over could bring even more load and tip over the remaining healthy-ish replicas. In this case, a decision should be made: either lose 1/x of your traffic or risk losing it all eventually. Ideally you overprovision your system, so you don't necessarily need to make this choice :-).
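The banlist logic above can be sketched roughly like this (the ban time and all names are illustrative, not PgCat's actual code):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Tracks banned replicas; a ban lifts automatically after `ban_time`.
pub struct BanList {
    bans: HashMap<String, Instant>,
    ban_time: Duration,
}

impl BanList {
    pub fn new(ban_time: Duration) -> Self {
        Self { bans: HashMap::new(), ban_time }
    }

    /// Ban a replica, e.g. after a failed connection or health check.
    pub fn ban(&mut self, replica: &str) {
        self.bans.insert(replica.to_string(), Instant::now());
    }

    /// A replica is usable if it was never banned or its ban has expired.
    pub fn is_banned(&mut self, replica: &str) -> bool {
        if let Some(banned_at) = self.bans.get(replica) {
            if banned_at.elapsed() < self.ban_time {
                return true;
            }
            self.bans.remove(replica); // ban expired: lift it
        }
        false
    }
}

fn main() {
    let mut bans = BanList::new(Duration::from_secs(60));
    bans.ban("replica1");
    println!("replica1 banned: {}", bans.is_banned("replica1"));
}
```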

Sharding

We're implementing Postgres' PARTITION BY HASH sharding function for BIGINT fields. This works well for tables that use a BIGSERIAL primary key, which I think is common enough these days. We can also add many more functions here, but this is a good start. See src/sharding.rs and tests/sharding/partition_hash_test_setup.sql for more details on the implementation.

The biggest advantage of using this sharding function is that anyone can shard the dataset using Postgres partitions while still accessing it for both reads and writes using this pooler. No custom obscure sharding function is needed, and database sharding can be done entirely in Postgres.

To select the shard we want to talk to, we introduced special syntax:

SET SHARDING KEY TO '1234';

This sharding key will be hashed and the pooler will select a shard to use for the next transaction. If the pooler is in session mode, this sharding key has to be set as the first query on startup & cannot be changed until the client re-connects.
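Conceptually, selecting a shard boils down to hashing the key and taking it modulo the number of shards. A toy sketch (std's DefaultHasher here is purely illustrative; the real function in src/sharding.rs must match Postgres' PARTITION BY HASH so the pooler and the partitions agree on row placement):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Map a BIGINT sharding key to a shard index.
/// NOTE: DefaultHasher is a stand-in for illustration only; PgCat
/// mirrors Postgres' own hash function for PARTITION BY HASH.
fn shard_for_key(key: i64, shard_count: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    hasher.finish() % shard_count
}

fn main() {
    // SET SHARDING KEY TO '1234'; conceptually resolves to:
    println!("shard = {}", shard_for_key(1234, 3));
}
```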

Missing

  1. Authentication, ahem, this proxy is letting anyone in at the moment.

Benchmarks

You can set up pgbench locally through PgCat:

pgbench -h 127.0.0.1 -p 6432 -i

Coincidentally, this uses COPY, so you can test whether that works too.

PgBouncer

$ pgbench -i -h 127.0.0.1 -p 6432 && pgbench -t 1000 -p 6432 -h 127.0.0.1 --protocol simple && pgbench -t 1000 -p 6432 -h 127.0.0.1 --protocol extended
dropping old tables...
creating tables...
generating data...
100000 of 100000 tuples (100%) done (elapsed 0.01 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done.
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
latency average = 1.089 ms
tps = 918.687098 (including connections establishing)
tps = 918.847790 (excluding connections establishing)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: extended
number of clients: 1
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
latency average = 1.136 ms
tps = 880.622009 (including connections establishing)
tps = 880.769550 (excluding connections establishing)


PgCat

$ pgbench -i -h 127.0.0.1 -p 6432 && pgbench -t 1000 -p 6432 -h 127.0.0.1 --protocol simple && pgbench -t 1000 -p 6432 -h 127.0.0.1 --protocol extended
dropping old tables...
creating tables...
generating data...
100000 of 100000 tuples (100%) done (elapsed 0.01 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done.
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
latency average = 1.142 ms
tps = 875.645437 (including connections establishing)
tps = 875.799995 (excluding connections establishing)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: extended
number of clients: 1
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
latency average = 1.181 ms
tps = 846.539176 (including connections establishing)
tps = 846.713636 (excluding connections establishing)


Direct Postgres

$ pgbench -i -h 127.0.0.1 -p 5432 && pgbench -t 1000 -p 5432 -h 127.0.0.1 --protocol simple && pgbench -t 1000 -p 5432 -h 127.0.0.1 --protocol extended
Password:
dropping old tables...
creating tables...
generating data...
100000 of 100000 tuples (100%) done (elapsed 0.01 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done.
Password:
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
latency average = 0.902 ms
tps = 1109.014867 (including connections establishing)
tps = 1112.318595 (excluding connections establishing)
Password:
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: extended
number of clients: 1
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 1000/1000
latency average = 0.931 ms
tps = 1074.017747 (including connections establishing)
tps = 1077.121752 (excluding connections establishing)

Comments
  • [RFD] Query-based routing.

    This is more of a discussion topic than an issue, but discussions are disabled, so here we are. :) Interesting (at least for me) functionality would be the ability to route queries to separate read traffic from write traffic. Simple use case: I have one master and two replicas; all write queries, no matter the client, should go to the master and all the rest to the replicas. Is this something worth considering?

    enhancement good first issue 
    opened by mskarbek 9
  • Don't send discard all when state is changed in transaction

    If a set statement is sent in a transaction then we shouldn't try to send the DISCARD ALL statement

    Gets rid of two message clones that are almost always hit

    opened by zainkabani 8
  • support exporting metrics like pgbouncer

    Is your feature request related to a problem? Please describe. we are using spreaker/prometheus-pgbouncer-exporter to monitor our pgbouncers, it will be great if pgcat has compatible metrics

    Describe the solution you'd like could be scrape by prometheus exporters, typically existing pgbouncer exporters

    Describe alternatives you've considered use general postgres exporter and manually craft the stat sql

    enhancement help wanted good first issue 
    opened by DeoLeung 6
  • selected wrong shard and the connection keep reporting fatal

    Describe the bug selected wrong shard and the connection keep reporting fatal

    To Reproduce

    • create 2 shards, naming shard 0 and 1
    • in a db ide, open a session
    • SET shard TO '2';, it reports FATAL: shard 2 is more than configured 2
    • SET shard TO '0';, it reports FATAL: shard 2 is more than configured 2

    Expected behavior be able to set shard to 0 and 1 after setting a wrong one

    opened by DeoLeung 5
  • Memory leak? after unclean connection shutdown

    Describe the bug pgcat memory usage climbs up monotonically if we repeatedly close connections mid-transaction (Maybe due to a memory leak?)

    Screen Shot 2022-05-18 at 2 35 00 PM

    To Reproduce Run the following Ruby code and monitor pgcat RSS

    require 'pg'
    $stdout.sync = true
    
    def work
        conn = PG::connect("postgres://main_user:@pgcat:5432/main_db")
        conn.async_exec("BEGIN")
        conn.close
    rescue Exception => e
        puts "Encountered #{e}"
    end
    
    
    sleep 5
    loop { 10.times.map { Thread.new { work } }.map(&:join) }
    
    

    Repeated conn.async_exec("SELECT 1") or conn.async_exec("BEGIN") followed by conn.async_exec("COMMIT") don't cause the issue (pgcat RSS hovers around 3MB in these cases) whereas with the unclosed transaction example, I was able to see a value of 55MB RSS after 10 minutes running.

    Expected behavior RSS shouldn't grow constantly.

    bug 
    opened by drdrsh 5
  • Send DISCARD ALL even if client is not in transaction

    We are not sending DISCARD ALL in all the situations that we should. This can leak session-specific settings/resources between clients.

    This is currently easily reproducible by repeatedly running PREPARE prepared_q (int) AS SELECT $1; in psql until an error emerges.

    I created a test to reproduce this issue and verified the fix. I am open to changing the name of the method.

    opened by drdrsh 4
  • Allow reader/writer endpoints for pools

    Experimenting with Reader/Writer endpoints

    With this PR, connecting with postgres://sharding_user:sharding_user@localhost:6432/sharded/writer is equivalent to connecting to postgres://sharding_user:sharding_user@localhost:6432/sharded and then setting role to primary before each query. Connecting with postgres://sharding_user:sharding_user@localhost:6432/sharded/reader is equivalent to connecting to postgres://sharding_user:sharding_user@localhost:6432/sharded and then setting role to replica before each query.

    This is useful for clients that maintain their own primary/replica pools but do not have the ability to ban bad replicas or load balance (e.g. a vanilla Rails 6 app).

    opened by drdrsh 4
  • ERROR:  DISCARD ALL cannot run inside a transaction block when connection closed uncleanly

    Describe the bug

    I am working on a test bench to test some edge cases against pgcat. Opening a transaction and then closing the client connection causes pgcat to issue ROLLBACK; DISCARD ALL; however, this behavior triggers an ERROR: DISCARD ALL cannot run inside a transaction block error on the server. Might be something wrong with my setup or an issue with pgcat.

    To Reproduce Put the following 3 files in the same directory and run docker compose up

    docker-compose.yml

    version: '3'
    services:
      pg:
        image: postgres:12-bullseye
        environment:
          POSTGRES_HOST_AUTH_METHOD: trust
          POSTGRES_DB: main_db
          POSTGRES_USER: main_user
    
      pgcat:
        build: https://github.com/levkk/pgcat.git#main
        restart: always
        command: ["pgcat", "/etc/pgcat/pgcat.toml"]
        links:
          - pg
        volumes:
          - "./config.toml:/etc/pgcat/pgcat.toml"
    
      app:
        image: ruby
        links:
          - pgcat
        volumes:
          - "./entrypoint.rb:/app/entrypoint.rb"
        command: bash -c "gem install pg && ruby /app/entrypoint.rb"
    

    entrypoint.rb

    require 'pg'
    $stdout.sync = true
    
    def poorly_behaved_client
        conn = PG::connect("postgres://main_user:@pgcat:5432/main_db")
        conn.async_exec("BEGIN")
        conn.async_exec("SELECT 1")
        conn.close
    rescue Exception => e
        puts "Encountered #{e}"
    end
    
    sleep 5
    loop { poorly_behaved_client }
    

    config.toml

    [general]
    host = "0.0.0.0"
    port = 5432
    pool_size = 5
    pool_mode = "transaction"
    connect_timeout = 5000
    healthcheck_timeout = 1000000
    ban_time = 1 # Seconds
    [user]
    name = "main_user"
    password = ""
    [shards]
    [shards.0]
    servers = [[ "pg", 5432, "primary" ]]
    database = "main_db"
    [query_router]
    default_role = "primary"
    query_parser_enabled = false
    primary_reads_enabled = false
    sharding_function = "pg_bigint_hash"
    

    Expected behavior No server errors.

    opened by drdrsh 4
  • Add more metrics to prometheus endpoint

    Currently the prometheus endpoint only outputs metrics related to 'stats' (those shown in the SHOW STATS admin command).

    This change:

    • Adds server metrics to prometheus endpoint.
    • Adds database metrics to prometheus endpoint.
    • Adds pools metrics to prometheus endpoint.
    • Changes metric names to have a prefix of (stats|pools|databases|servers).

    This change is not backward compatible as metric names are now prefixed. Maybe we can leave current metric names 'as is' and add prefixes only for the new stuff, but I like it more with everything prefixed.

    /cc @dat2

    opened by magec 3
  • Change sharding config to enum and move validation of configs into public functions

    Moves config validation to own functions to enable tools to use them

    Moves sharding config to enum

    Makes defaults public

    Make connect_timeout on pool and option which is overwritten by general connect_timeout

    opened by zainkabani 3
  • Add SHOW CLIENTS / SHOW SERVERS + Stats refactor and tests

    This started as an effort to add SHOW CLIENTS to allow for attribution of server work to specific application_names. What ended up happening is a big refactor of the PgCat stats after I encountered a bug where cl_* stats seemed to wrongfully include admin clients; I wasn't able to fix that without changing a lot of things, hence this refactor.

    The major changes here are

    • Introduction of SHOW CLIENTS and SHOW SERVERS queries, both are backed by an ArcSwap HashMap that gets updated by the stats collector every second.
    • Introduction of client_register and server_register events that mark a clear starting point for tracking stats for both client and server. We collect some information about clients/servers in that point and continue to use that information for subsequent events.
    • Using Anonymous structs in the Enum variants to attach relevant information to each event.
    • Making a distinction between server_id which is a random identifier we generate for each server connection and the process_id which is the Postgres Backend PID used for cancel query requests. Prior to this PR both concepts were one and the same.
    • Report client errors using the stats system, e.g. failing to obtain a connection from a pool or a server getting banned while a client is talking to it.
    • Some changes to the placement of event reporting calls
    • Integration tests for stats
    opened by drdrsh 3
  • Allow server to be offline during validation

    If a server is offline during startup or a config reload, instead of failing, we will log the error. Then, when a client tries to connect to a particular pool, pgcat will attempt to connect to the upstream database server and obtain the server info params before passing them to the client. These server info params are then cached, so the next client will not need to wait for pgcat to connect to the upstream database again. The parameters are warmed or re-warmed during a reload.

    This resolves https://github.com/levkk/pgcat/issues/254 .

    opened by film42 2
  • Auth query

    Auth query feature

    This change adds a feature that allows setting auth passthrough for md5.

    Configuration

    • auth_query: A string containing a query that will be executed against every server of a pool on boot to obtain the hash of a given user. This query has to use a placeholder $1, so pgcat can replace it with the user it's trying to fetch the hash for.
    • auth_query_user: The user to use for connecting to the server and executing the auth_query.
    • auth_query_password: The password to use for connecting to the server and executing the auth_query.
    • auth_query_database: The database to use for connecting to the server and executing the auth_query.

    The behavior is: at boot time, when validating server connections, a hash is fetched per server and stored. When new server connections are created, that hash is used for creating them; if the hash could not be obtained for whatever reason, we fall back to the password set in the pool.

    Client connections are also authenticated using the obtained hash.

    Testing

    This is currently tested using the integration tests:

    • We test we can connect when no passwords are specified for pools and auth_query is set.
    • We test we can connect when auth_query is configured but failing and passwords are set.

    Considerations

    • A new crate is added to deal with Postgres protocol parsing; I first did it myself and didn't like the result, so I thought, why reinvent the wheel?
    • Currently, hash refresh is not done when reloading if no pools have changed.
    • No documentation is written yet; I want to discuss the change first and will write docs if we agree on the merge.
    • This fixes (https://github.com/levkk/pgcat/issues/253)
    opened by magec 4
  • Add more metrics to prometheus endpoint

    Currently the prometheus endpoint only outputs metrics related to 'stats' (those shown in the SHOW STATS admin command).

    This change:

    • Adds server metrics to prometheus endpoint.
    • Adds database metrics to prometheus endpoint.
    • Adds pools metrics to prometheus endpoint.
    • Changes metric names to have a prefix of (stats|pools|databases|servers).

    This change is not backward compatible as metric names are now prefixed. Maybe we can leave current metric names 'as is' and add prefixes only for the new stuff, but I like it more with everything prefixed.

    /cc @dat2

    opened by magec 0
  • Add support for client SCRAM authentication

    Is your feature request related to a problem? Please describe.

    Postgres 14 switched the default password encryption and authentication method from Md5 to SCRAM-SHA-256. PgCat can authenticate to the servers using SCRAM, but it can't authenticate clients using SCRAM, only Md5. Client libraries still support Md5, but medium/long term Postgres is sure to remove that insecure authentication algorithm from libpq, so we need to add support for SCRAM for client auth as well.

    Describe the solution you'd like Add support for client-initiated auth to scram.rs and add support for it in client.rs.

    Describe alternatives you've considered There aren't any, this is a necessary change.

    Additional context #253

    opened by levkk 0
  • Servers need to be available for a successful configuration reloading

    Describe the bug ConnectionPool requires all servers to be reachable to reload the config. Specifically, it fetches server information and passes it to the client statically instead of dynamically when the connection is made. This ensures that the servers are correctly configured before a configuration is made live, but it also blocks configuration reload for every server if only one server is down.

    To Reproduce Steps to reproduce the behavior:

    1. Place an incorrect IP/port into the server config.
    2. Try to reload config with kill -HUP $(pgrep pgcat)
    3. It should be blocked on reloading config, although the pooler will remain online with existing config.

    Expected behavior Good question. I think it should skip the broken server and issue an error to the log.

    Desktop (please complete the following information):

    • OS: Ubuntu
    • Version: latest main

    Additional context This can be thought of both as a feature and a bug. It's conceivable that servers that are not reachable should not be allowed into the system, but intermittent issues happen and ideally the pooler shouldn't rely on all servers to be available to do its job, e.g. in multi-tenant or heavily sharded/replicated clusters, where the failure of one shard/replica shouldn't impact the continuous ability of the pooler to do its job.

    bug enhancement 
    opened by levkk 2
  • Allow an 'auth_query' so passwords for pools can be loaded from database servers

    Is your feature request related to a problem? Please describe.

    We currently use PgBouncer with the md5 password method using auth_query, so we do not need to change its configuration when passwords change, etc. It also allows us not to have to set passwords in three places: application -> DB connection pooler -> database server.

    It would be nice to have a feature like PgBouncer/Odyssey have that allows md5 authentication using passwords stored on the server backends, by means of a query to the target server. This way we only have to tell PgCat one password: the one used to connect to the server for looking up the other passwords. The rest of the passwords are obtained from the DB.

    Describe the solution you'd like I find Pgbouncer neat.

    In the case of PGCat It would be something like:


    auth_query string

    Enable remote user authentication.

    Whenever a new client connection is opened and MD5 auth is used, use 'auth_query' against target server (using auth_user and auth_password) to obtain user password.

    auth_query "SELECT usename, passwd FROM pg_shadow WHERE usename=$1"
    auth_user ""
    auth_password ""
    

    This is usually done using a function, so you can use an unprivileged user that has access to just this table (see this).

    Disabled by default.


    Also, to ease deployment in containerized environments, it would be nice to be able to overwrite auth_query_password using an environment variable like PGCAT_AUTH_QUERY_PASSWORD. This way, if the admin password can also be overridden by an env var, the config file will be password-less, which improves security and simplifies deployment in containerized environments.

    NOTE: I currently have some bandwidth to implement this.

    opened by magec 7
Releases(v0.3.0)
  • v0.3.0(Oct 8, 2022)

    Summary

    Lots of bug fixes and optimizations.

    What's Changed

    • Live reloading entire config and bug fixes by @levkk in https://github.com/levkk/pgcat/pull/84
    • Fix panic & query router bug by @levkk in https://github.com/levkk/pgcat/pull/85
    • Automatically reload config every seconds (disabled by default) by @levkk in https://github.com/levkk/pgcat/pull/86
    • Fix stats dymanic reload by @levkk in https://github.com/levkk/pgcat/pull/87
    • Support for TLS by @levkk in https://github.com/levkk/pgcat/pull/91
    • Bump activerecord from 7.0.2.2 to 7.0.3.1 in /tests/ruby by @dependabot in https://github.com/levkk/pgcat/pull/94
    • Add support for multi-database / multi-user pools by @drdrsh in https://github.com/levkk/pgcat/pull/96
    • Minor fix for some stats by @chhetripradeep in https://github.com/levkk/pgcat/pull/97
    • Add Serialize trait to configs by @drdrsh in https://github.com/levkk/pgcat/pull/99
    • Slightly more light weight health check by @drdrsh in https://github.com/levkk/pgcat/pull/100
    • Avoid ValueAfterTable when serializing configs by @drdrsh in https://github.com/levkk/pgcat/pull/101
    • Add test for config Serializer by @drdrsh in https://github.com/levkk/pgcat/pull/102
    • Send proper server parameters to clients using admin db by @drdrsh in https://github.com/levkk/pgcat/pull/103
    • Sync pgcat config for docker-compose by @chhetripradeep in https://github.com/levkk/pgcat/pull/104
    • Fix Python tests and remove CircleCI-specific path by @drdrsh in https://github.com/levkk/pgcat/pull/106
    • Add user to SHOW STATS query by @drdrsh in https://github.com/levkk/pgcat/pull/108
    • Report banned addresses as disabled by @drdrsh in https://github.com/levkk/pgcat/pull/111
    • Generate test coverage report in CircleCI by @drdrsh in https://github.com/levkk/pgcat/pull/110
    • Fix local dev by @drdrsh in https://github.com/levkk/pgcat/pull/112
    • Implementing graceful shutdown by @zainkabani in https://github.com/levkk/pgcat/pull/105
    • Prevent clients from sticking to old pools after config update by @drdrsh in https://github.com/levkk/pgcat/pull/113
    • create a prometheus exporter on a standard http port by @dat2 in https://github.com/levkk/pgcat/pull/107
    • Validates pgcat is closed after shutdown python tests by @zainkabani in https://github.com/levkk/pgcat/pull/116
    • fix docker compose port allocation for local dev by @dat2 in https://github.com/levkk/pgcat/pull/117
    • Health check delay by @zainkabani in https://github.com/levkk/pgcat/pull/118
    • Speed up CI a bit by @levkk in https://github.com/levkk/pgcat/pull/119
    • Fix debug log by @levkk in https://github.com/levkk/pgcat/pull/120
    • Make prometheus port configurable by @chhetripradeep in https://github.com/levkk/pgcat/pull/121
    • Statement timeout + replica imbalance by @levkk in https://github.com/levkk/pgcat/pull/122
    • Add cl_idle to SHOW POOLS by @drdrsh in https://github.com/levkk/pgcat/pull/124
    • Fix missing statistics by @levkk in https://github.com/levkk/pgcat/pull/125
    • Minor cleanup in admin command by @chhetripradeep in https://github.com/levkk/pgcat/pull/126
    • Add pool name and username to address object by @drdrsh in https://github.com/levkk/pgcat/pull/128
    • Minor Refactoring of re-used code and server stat reporting by @zainkabani in https://github.com/levkk/pgcat/pull/129
    • Random instance selection by @drdrsh in https://github.com/levkk/pgcat/pull/136
    • Random lb by @levkk in https://github.com/levkk/pgcat/pull/138
    • Fix incorrect routing for replicas by @levkk in https://github.com/levkk/pgcat/pull/139
    • Fix too many idle servers by @levkk in https://github.com/levkk/pgcat/pull/140
    • Really fix idle servers by @levkk in https://github.com/levkk/pgcat/pull/141
    • Avoid sending Z packet in the middle of extended protocol packet sequence if we fail to get connection form pool by @drdrsh in https://github.com/levkk/pgcat/pull/137
    • Graceful shutdown and refactor by @levkk in https://github.com/levkk/pgcat/pull/144
    • Exit with failure codes if configs are bad by @drdrsh in https://github.com/levkk/pgcat/pull/146
    • Move autoreloader to own tokio task by @zainkabani in https://github.com/levkk/pgcat/pull/148
    • Ruby integration tests by @drdrsh in https://github.com/levkk/pgcat/pull/147
    • Allow running integration tests with coverage locally by @drdrsh in https://github.com/levkk/pgcat/pull/151
    • Log Address information in connection create/drop by @drdrsh in https://github.com/levkk/pgcat/pull/154
    • Better handling extended protocol messages in the event of busy pool by @drdrsh in https://github.com/levkk/pgcat/pull/155
    • Send DISCARD ALL even if client is not in transaction by @drdrsh in https://github.com/levkk/pgcat/pull/152
    • Patch graceful shutdown bug by @zain-kabani in https://github.com/levkk/pgcat/pull/157
    • Main Thread Panic when swarmed with clients by @drdrsh in https://github.com/levkk/pgcat/pull/158
    • Adds microsecond logging and also reformats duration to include milliseconds by @zainkabani in https://github.com/levkk/pgcat/pull/156
    • Avoid reporting ProtocolSyncError when admin session disconnects by @drdrsh in https://github.com/levkk/pgcat/pull/160
    • Better logging for failure to get connection from pool by @drdrsh in https://github.com/levkk/pgcat/pull/161
    • Send signal even if process is gone by @levkk in https://github.com/levkk/pgcat/pull/162
    • Clean connection state up after protocol named prepared statement by @drdrsh in https://github.com/levkk/pgcat/pull/163
    • Add Discord link by @levkk in https://github.com/levkk/pgcat/pull/164
    • Add SHOW CLIENTS / SHOW SERVERS + Stats refactor and tests by @drdrsh in https://github.com/levkk/pgcat/pull/159
    • Report Query times by @drdrsh in https://github.com/levkk/pgcat/pull/166
    • Export pgcat objects in lib by @zainkabani in https://github.com/levkk/pgcat/pull/169
    • Update to latest library versions by @zainkabani in https://github.com/levkk/pgcat/pull/170
    • Minor refactor for configs by @zainkabani in https://github.com/levkk/pgcat/pull/172
    • Add defaults for configs by @zainkabani in https://github.com/levkk/pgcat/pull/174
    • Log failed client logins by @drdrsh in https://github.com/levkk/pgcat/pull/173
    • Don't drop connections if DB hasn't changed by @levkk in https://github.com/levkk/pgcat/pull/175
    • Fix the pool fix by @levkk in https://github.com/levkk/pgcat/pull/176
    • Set client state to idle after error by @drdrsh in https://github.com/levkk/pgcat/pull/179
    • Change sharding config to enum and move validation of configs into public functions by @zainkabani in https://github.com/levkk/pgcat/pull/178
    • Replace a few types with more developer-friendly names by @levkk in https://github.com/levkk/pgcat/pull/182
    • Fix maxwait metric by @drdrsh in https://github.com/levkk/pgcat/pull/183

    New Contributors

    • @drdrsh made their first contribution in https://github.com/levkk/pgcat/pull/96
    • @chhetripradeep made their first contribution in https://github.com/levkk/pgcat/pull/97
    • @dat2 made their first contribution in https://github.com/levkk/pgcat/pull/107
    • @zain-kabani made their first contribution in https://github.com/levkk/pgcat/pull/157

    Full Changelog: https://github.com/levkk/pgcat/compare/v0.2.1-beta1...v0.3.0

  • v0.2.1-beta1(Jun 20, 2022)

  • v0.2.0-beta1(Jun 19, 2022)

  • v0.1.0-beta3(May 21, 2022)

    Plugged a memory leak in client -> server mapping used for query cancellation that happened with clients that disconnected mid-transaction.

  • v0.1.0-beta2(May 10, 2022)

  • v0.1.0-beta(Mar 8, 2022)

This is a beta release. The feature set is mature enough to use in low-risk environments: dev, staging, pre-prod, and low-traffic production.

    The admin database supports monitoring with Datadog.

  • v0.1.0-alpha(Feb 22, 2022)

Owner
Lev Kokotov
Software engineer living in San Francisco, CA.