Overview

RisingWave is a cloud-native streaming database that uses SQL as the interface language. It is designed to reduce the complexity and cost of building real-time applications. RisingWave consumes streaming data, performs continuous queries, and updates results dynamically. As a database system, RisingWave maintains results inside its own storage and allows users to access data efficiently.

RisingWave ingests data from sources like Apache Kafka, Apache Pulsar, Amazon Kinesis, Redpanda, and materialized CDC sources.

Learn more at Introduction to RisingWave.

Quick Start

Installation

There are two ways to install RisingWave: use a pre-built package or compile from source.

Use a Pre-built Package (Linux)

# Download the pre-built binary
wget https://github.com/singularity-data/risingwave/releases/download/v0.1.5/risingwave-v0.1.5-x86_64-unknown-linux.tar.gz
# Unzip the binary
tar xvf risingwave-v0.1.5-x86_64-unknown-linux.tar.gz
# Start RisingWave in single-binary playground mode
./risingwave playground

Compile from Source with RiseDev (Linux and macOS)

# Install Rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Clone the repo
git clone https://github.com/singularity-data/risingwave.git && cd risingwave
# Compile and start the playground
./risedev playground

To build from source, you need to pre-install several tools on your system. You may use ./risedev configure to adjust the compile settings.

You may also launch a RisingWave cluster to process streaming data in a distributed manner, or enable other features like metrics collection and data persistence. Please refer to the Contribution and Development Guidelines for more information.

Your First Query

To connect to the RisingWave server, you will need to install the PostgreSQL shell (psql) in advance.

# Use psql to connect to the RisingWave cluster
psql -h localhost -p 4566
/* create a table */
create table t1(v1 int not null);

/* create a materialized view based on the previous table */
create materialized view mv1 as select sum(v1) as sum_v1 from t1;

/* insert some data into the source table */
insert into t1 values (1), (2), (3);

/* (optional) ensure the materialized view has been updated */
flush;

/* the materialized view should reflect the changes in source table */
select * from mv1;

If everything works correctly, you should see

 sum_v1
--------
      6
(1 row)

in the terminal.

Connecting to an External Source

Please refer to the getting started guide for more information.
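
As a rough illustration, creating a Kafka source might look like the sketch below; the connector option names here are assumptions and may differ across versions, so consult the guide for the exact syntax.

/* a hedged sketch: option names are illustrative assumptions */
create source kafka_src (v1 int, v2 varchar) with (
    connector = 'kafka',
    kafka.topic = 'my_topic',           /* hypothetical topic name */
    kafka.brokers = 'localhost:9092'    /* hypothetical broker address */
) row format json;

Once created, such a source can feed materialized views in the same way as the table in Your First Query.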

Documentation

To learn about how to use RisingWave, refer to RisingWave docs. To learn about how we design and implement RisingWave, refer to RisingWave developer docs.

License

RisingWave is under the Apache License 2.0. Please refer to LICENSE for more information.

Contributing

Thanks for your interest in contributing to the project! Please refer to Contribution and Development Guidelines for more information.

Comments
  • storage: invalid iter when running e2e

    storage: invalid iter when running e2e

    thread 'tokio-runtime-worker' panicked at 'invalid iter', storage/src/hummock/sstable/sstable_iterator.rs:95:14 stack backtrace:

    https://github.com/singularity-data/risingwave-dev/runs/5462156952?check_suite_focus=true

    type/bug 
    opened by skyzh 34
  • refactor(storage): add table_id parameter in state store write and sync method for future partial checkpoint support

    refactor(storage): add table_id parameter in state store write and sync method for future partial checkpoint support

    What's changed and what's your intention?

    This PR is for supporting partial checkpoint proposed in #1157.

    In this PR, we add a table_id parameter to the ingest_batch, sync, and wait_epoch methods of the StateStore trait so that we can keep track of where the KVs are written from. Besides, we group the write batches by their table_id in shared_buffer_uploader and enable syncing a subset of tables in an epoch.

    This concept of a table is introduced for finer-grained tracking of where the KVs are written from, so that we can support partial checkpoint, vertical grouping, and shared arrangement in the future.

    A storage table id is usually the operator id, or the table id of the relational table. The table_id is not assigned yet; all table_ids are GLOBAL_STORAGE_TABLE_ID for the time being. In future development we will assign table_id according to the operator id.

    Some unit tests and related structs are modified accordingly.

    Checklist

    • [x] I have written necessary docs and comments
    • [x] I have added necessary unit tests and integration tests

    Refer to a related PR or issue link (optional)

    #1157

    type/refactor 
    opened by wenym1 29
  • bug(debug profile): segfault/EXC_BAD_ACCESS during backtrace capture

    bug(debug profile): segfault/EXC_BAD_ACCESS during backtrace capture

    Describe the bug

    When running playground on macOS with the latest main (first bad commit db6691ba142af74544d208afcd3e47b809b00e00), the following SQL commands lead to a server crash with segfault/EXC_BAD_ACCESS.

    It works as expected in cluster mode (./risedev d), but not in playground mode.

    To Reproduce

    CREATE TABLE t(a int, b int);
    CREATE VIEW v AS SELECT * FROM t;
    DROP TABLE t;
    

    Expected behavior

    Before that commit we were able to see the expected error:

    ERROR:  QueryError: Permission denied: PermissionDenied: Fail to delete table `t` because 1 other relation(s) depend on it
    

    Additional context

    Console warnings before the segfault (also present on the last good commit, so they may be unrelated):

    2022-11-04T15:37:18.08988+08:00  WARN risingwave_storage::hummock::state_store: sealing invalid epoch    
    2022-11-04T15:37:18.090268+08:00  WARN risingwave_storage::hummock::state_store: syncing invalid epoch    
    

    backtrace from lldb:

    * thread #7, name = 'risingwave-main', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
      * frame #0: 0x000000019f39685c libunwind.dylib`libunwind::CFI_Parser<libunwind::LocalAddressSpace>::parseFDEInstructions(libunwind::LocalAddressSpace&, libunwind::CFI_Parser<libunwind::LocalAddressSpace>::FDE_Info const&, libunwind::CFI_Parser<libunwind::LocalAddressSpace>::CIE_Info const&, unsigned long, int, libunwind::CFI_Parser<libunwind::LocalAddressSpace>::PrologInfo*) + 204
        frame #1: 0x000000019f396710 libunwind.dylib`libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_arm64>::getInfoFromFdeCie(libunwind::CFI_Parser<libunwind::LocalAddressSpace>::FDE_Info const&, libunwind::CFI_Parser<libunwind::LocalAddressSpace>::CIE_Info const&, unsigned long, unsigned long) + 100
        frame #2: 0x000000019f3963e8 libunwind.dylib`libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_arm64>::getInfoFromDwarfSection(unsigned long, libunwind::UnwindInfoSections const&, unsigned int) + 184
        frame #3: 0x000000019f3962a0 libunwind.dylib`libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_arm64>::setInfoBasedOnIPRegister(bool) + 1012
        frame #4: 0x000000019f398788 libunwind.dylib`libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_arm64>::step() + 696
        frame #5: 0x000000019f39b138 libunwind.dylib`_Unwind_Backtrace + 352
        frame #6: 0x0000000109081e40 risingwave`std::backtrace::Backtrace::create::h908375f7f84cb508 [inlined] std::backtrace_rs::backtrace::libunwind::trace::h471a59e08ff9e5dc at mod.rs:66:5 [opt]
        frame #7: 0x0000000109081e30 risingwave`std::backtrace::Backtrace::create::h908375f7f84cb508 [inlined] std::backtrace_rs::backtrace::trace_unsynchronized::h4e694232d85e2708 at mod.rs:66:5 [opt]
        frame #8: 0x0000000109081e24 risingwave`std::backtrace::Backtrace::create::h908375f7f84cb508 at backtrace.rs:333:13 [opt]
        frame #9: 0x00000001060d8454 risingwave`_$LT$risingwave_meta..error..MetaError$u20$as$u20$core..convert..From$LT$risingwave_meta..error..MetaErrorInner$GT$$GT$::from::hb4b62fbc8685e728(inner=<unavailable>) at error.rs:66:33
        frame #10: 0x0000000105e7e1d8 risingwave`_$LT$T$u20$as$u20$core..convert..Into$LT$U$GT$$GT$::into::h7837bc8fb77e8181(self=<unavailable>) at mod.rs:726:9
        frame #11: 0x00000001060d8830 risingwave`risingwave_meta::error::MetaError::permission_denied::h6592a4f64415a283(s=<unavailable>) at error.rs:96:9
        frame #12: 0x00000001064bb43c risingwave`risingwave_meta::manager::catalog::CatalogManager$LT$S$GT$::drop_materialized_source::_$u7b$$u7b$closure$u7d$$u7d$::h36956825a3496ee1((null)=ResumeTy @ 0x0000000170c3e2b0) at mod.rs:1012:36
    (... more callers omitted ...)
    
    type/bug 
    opened by xiangjinwu 28
  • Avoid shared buffer thrashing.

    Avoid shared buffer thrashing.

    Heavy streaming TPC-H causes high write throughput (more than 600~700 MB/s) into the shared buffer, while the sync operation normally drains it at a much lower rate, causing the shared buffer to grow and swap. The example shows the shared buffer accumulating with a ~400 MB/s input and a ~200 MB/s output; over 2 minutes, that would be 24 GB of memory usage, causing severe page swapping as well as database thrashing.

    twocode@twocode-pc:/mnt/c/Users/dell$ sar -W 1 1000
    21:46:12     pswpin/s pswpout/s
    21:46:56         9.00  11329.00
    21:46:57         0.00      0.00
    21:46:58         0.00  45607.00
    21:46:59         8.00      0.00
    21:47:00         0.00   1024.00
    21:47:01         7.00  28672.00
    21:47:02         6.00  50476.00
    21:47:03         3.00  67409.00
    21:47:04        22.00  79243.00
    21:47:05         4.00  45165.00
    
    good first issue 
    opened by twocode 24
  • feat: Implement `create table as select`

    feat: Implement `create table as select`

    Is your feature request related to a problem? Please describe.

    Implement the create table as select statement to support the duckdb tests.
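
    For concreteness, a minimal sketch of the requested statement, assuming the standard SQL form (table names are hypothetical):

    /* standard CTAS form; hypothetical names */
    create table t_copy as select v1, v2 from t where v1 > 0;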

    Describe the solution you'd like

    No response

    Describe alternatives you've considered

    No response

    Additional context

    No response

    type/feature 
    opened by liurenjie1024 23
  • batch: Push limit to table scan.

    batch: Push limit to table scan.

    For following sql:

    select * from t order by a limit 10000, 10
    

    Assuming that t is ordered by a within each partition, we should generate the following plan:

    TopN(offset = 10000, limit = 10)
            |
    MergeSortExchange
            |
    TableScan(limit = 10010)
    

    This way we don't need to pull the first 10000 rows into the executor, which avoids encoding/decoding those rows. But the storage layer needs to provide an API for this kind of skipping.

    type/feature component/batch component/optimizer component/storage needs-investigation 
    opened by liurenjie1024 23
  • refactor(batch): Use futures-async-stream to implement values executor

    refactor(batch): Use futures-async-stream to implement values executor

    What's changed and what's your intention?

    Implement the values executor using the Executor2 trait.

    Please explain IN DETAIL what the changes are in this PR and why they are needed:

    • Summarize your change (mandatory)
    • How does this PR work? Need a brief introduction for the changed logic (optional)
    • Describe clearly one logical change and avoid lazy messages (optional)
    • Describe any limitations of the current code (optional)

    Checklist

    • [x] I have written necessary docs and comments
    • [x] I have added necessary unit tests and integration tests

    Refer to a related PR or issue link (optional)

    close https://github.com/singularity-data/risingwave/issues/1946

    opened by D2Lark 23
  • feat(now): implement `now()` in batch

    feat(now): implement `now()` in batch

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    Enable the batch system to evaluate NOW() as a function instead of an operator. Result before this PR:

    dev=> select now();
    ERROR:  QueryError: internal error: Failed to build executor: Expr error: Unsupported function: Ok(Now)
    

    Result after this PR:

    dev=> select now();
              ?column?          
    ----------------------------
     2023-01-03 08:01:52.304273
    (1 row)
    
    dev=> create table t (v1 int);
    CREATE_TABLE
    dev=> create materialized view mv as select now() as nw from t where v1 >= 3;
    ERROR:  QueryError: internal error: Rpc error: gRPC error (Internal error): Expression error: Expected session timestamp bound into Now
    dev=> create table tt(v1 int, t1 timestamp); 
    CREATE_TABLE
    dev=> create materialized view mv as select * from tt where tt.t1 > now() - INTERVAL '1 hour';
    CREATE_MATERIALIZED_VIEW
    

    We insert a session timestamp into the arguments of NOW(), which will only be used in the batch system.

    Checklist

    • [x] I have written necessary rustdoc comments
    • [x] All checks passed in ./risedev check (or alias, ./risedev c)

    Refer to a related PR or issue link (optional)

    #7139

    type/feature mergify/can-merge 
    opened by soundOfDestiny 22
  • feat(hummock/meta): Reliable task cancellation via heartbeats + external task cancellation gRPC

    feat(hummock/meta): Reliable task cancellation via heartbeats + external task cancellation gRPC

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    Cancel compaction tasks and trigger new ones for their compaction group if no progress is made in num_blocks_completed or num_ssts_uploaded after a certain timeout.

    Checklist

    • [x] I have written necessary rustdoc comments
    • [x] I have added necessary unit tests and integration tests
    • [x] All checks passed in ./risedev check (or alias, ./risedev c)

    Refer to a related PR or issue link (optional)

    RFC: https://github.com/singularity-data/risingwave/issues/4457 Fixes: https://github.com/singularity-data/risingwave/issues/3677, https://github.com/singularity-data/risingwave/issues/4511, https://github.com/risingwavelabs/risingwave/issues/3677

    type/feature mergify/can-merge 
    opened by jon-chuang 22
  • feat(CI): add e2e test of extended query mode in CI

    feat(CI): add e2e test of extended query mode in CI

    Is your feature request related to a problem? Please describe. The extended query mode can now run most of our original e2e tests, so we can add it to the CI.

    Describe the solution you'd like

    1. Add e2e_test_extended to the e2e_test folder (e2e_test/e2e_test_extended).

      e2e_test_extended is a copy of e2e_test. The difference: because the extended query mode can't support the {array, struct, interval, Decimal::Infinity} types yet, some test cases in e2e_test_extended are ignored by naming them '.slt.part.un' instead of '.slt.part'. (As more types are supported, we will re-enable these test cases; once e2e_test_extended supports all of them, we can use e2e_test directly.)

    2. Add new CI scripts in ci/scripts:

    • e2e-source-test-extended.sh
    • e2e-test-parallel-extended.sh
    • e2e-test-extended.sh
    type/feature component/ci 
    opened by ZENOTME 21
  • feat(meta): implement create sink

    feat(meta): implement create sink

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    This PR implements the framework for the CREATE SINK feature.

    Please explain IN DETAIL what the changes are in this PR and why they are needed:

    • The frontend now produces a StreamFragmentGraph which consists of an upstream ChainNode and a downstream SinkNode (src/frontend/src/handler/create_sink.rs).
    • The frontend implements methods to add and delete a sink to/from the SchemaCatalog.
    • Meta receives the CreateSinkRequest from the frontend; dependent relations are checked and the SinkManager is initialized.
    • Via the SinkManager, a create sink request is sent to the compute node.
    • The compute node can add/remove a Sink to/from the MemSinkManager (located in the src/sink directory).

    ./e2e_test/sink/basic_test.slt is runnable; however, the only thing that currently happens is that the sink is added to/removed from the respective nodes, plus the communication between frontend/meta/compute. Basically the framework is there, but the actual implementation of create sink is still lacking.

    Crucial missing functionalities:

    • Meta: Actual actors need to be generated using ActorGraphBuilder and "connected" to the upstream materialized view.
    • Compute: SinkExecutorBuilder needs to be called and executed.
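
    For reference, the user-facing statement this framework targets looks roughly like the hedged sketch below; the sink name and connector options are illustrative assumptions, not taken from this PR:

    /* a sketch only: name and options are hypothetical */
    create sink snk from mv1 with (
        connector = 'kafka',               /* hypothetical connector */
        kafka.brokers = 'localhost:9092',  /* hypothetical option name */
        kafka.topic = 'sink_topic'         /* hypothetical option name */
    );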

    Checklist

    • [x] I have written necessary rustdoc comments
    • [x] I have added necessary unit tests and integration tests
    • [x] All checks passed in ./risedev check (or alias, ./risedev c)

    Documentation

    If your pull request contains user-facing changes, please specify the types of the changes, and create a release note. Otherwise, please feel free to remove this section.

    Types of user-facing changes

    Please keep the types that apply to your changes, and remove those that do not apply.

    • Connector (sources & sinks)

    Release note

    Please create a release note for your changes. In the release note, focus on the impact on users, and mention the environment or conditions where the impact may occur.

    Refer to a related PR or issue link (optional)

    type/feature mergify/can-merge 
    opened by nanderstabel 21
  • Discussion: allow `,` at the end in `With` clause

    Discussion: allow `,` at the end in `With` clause

    dev=> create table t1 (v1 int, v2 int);
    CREATE_TABLE
    dev=> create table t2 (v1 int, v2 int,);
    CREATE_TABLE
    
    dev=> create source s1 (v1 int, v2 int,) with ( connector = 'datagen') ROW FORMAT JSON;
    CREATE_SOURCE
    dev=> create source s1 (v1 int, v2 int,) with ( connector = 'datagen',) ROW FORMAT JSON;
    ERROR:  QueryError: sql parser error: Expected identifier, found: )
    

    Seems like allowing it would not hurt?

    opened by lmatz 0
  • refactor(meta): unify `create_table` with `create_stream_job`

    refactor(meta): unify `create_table` with `create_stream_job`

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    • Remove the create_table_inner and unify it into create_stream_job.
    • Extract a visit_fragment utility and use it for filling source IDs / streaming job IDs.

    Checklist

    • [x] I have written necessary rustdoc comments
    • [x] I have added necessary unit tests and integration tests
    • [x] All checks passed in ./risedev check (or alias, ./risedev c)

    Refer to a related PR or issue link (optional)

    • #6903
    type/refactor 
    opened by BugenZhao 0
  • feat(kafka): option of setting max number of messages

    feat(kafka): option of setting max number of messages

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    As per the title, this adds an option that will be used in Kafka performance testing, as discussed with @huangjw806, who will use it to control how long a query runs while all the queries read from the same Kafka topic (one by one).

    This is only used internally, so it will not go into the docs.

    Checklist

    • [x] I have written necessary rustdoc comments
    • [x] I have added necessary unit tests and integration tests
    • [x] All checks passed in ./risedev check (or alias, ./risedev c)
    type/feature 
    opened by lmatz 2
  • fix: join actor when dropping

    fix: join actor when dropping

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    Ensure the actor task is completely dropped after calling drop_all_actors; see https://github.com/risingwavelabs/risingwave/issues/7208#issuecomment-1375060881. Also removes the unused method wait_all.

    Checklist

    • ~~I have written necessary rustdoc comments~~
    • ~~I have added necessary unit tests and integration tests~~

    • [x] All checks passed in ./risedev check (or alias, ./risedev c)

    Documentation

    If your pull request contains user-facing changes, please specify the types of the changes, and create a release note. Otherwise, please feel free to remove this section.

    Types of user-facing changes

    Please keep the types that apply to your changes, and remove those that do not apply.

    • Installation and deployment
    • Connector (sources & sinks)
    • SQL commands, functions, and operators
    • RisingWave cluster configuration changes
    • Other (please specify in the release note below)

    Release note

    Please create a release note for your changes. In the release note, focus on the impact on users, and mention the environment or conditions where the impact may occur.

    Refer to a related PR or issue link (optional)

    fix #7208

    type/fix 
    opened by zwang28 2
  • feat(metrics): add total keys count in AGG

    feat(metrics): add total keys count in AGG

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    As per the title

    Checklist

    • [x] I have written necessary rustdoc comments
    • [x] I have added necessary unit tests and integration tests
    • [x] All checks passed in ./risedev check (or alias, ./risedev c)

    Refer to a related PR or issue link (optional)

    type/feature 
    opened by lmatz 3
  • chore(catalog): stub sqlancer tables

    chore(catalog): stub sqlancer tables

    I hereby agree to the terms of the Singularity Data, Inc. Contributor License Agreement.

    What's changed and what's your intention?

    WIP

    This section will be used as the commit message. Please do not leave this empty!

    Please explain IN DETAIL what the changes are in this PR and why they are needed:

    • Summarize your change (mandatory)
    • How does this PR work? Need a brief introduction for the changed logic (optional)
    • Describe clearly one logical change and avoid lazy messages (optional)
    • Describe any limitations of the current code (optional)

    Checklist

    • [ ] I have written necessary rustdoc comments
    • [ ] I have added necessary unit tests and integration tests
    • [ ] All checks passed in ./risedev check (or alias, ./risedev c)

    Documentation

    If your pull request contains user-facing changes, please specify the types of the changes, and create a release note. Otherwise, please feel free to remove this section.

    Types of user-facing changes

    Please keep the types that apply to your changes, and remove those that do not apply.

    • Installation and deployment
    • Connector (sources & sinks)
    • SQL commands, functions, and operators
    • RisingWave cluster configuration changes
    • Other (please specify in the release note below)

    Release note

    Please create a release note for your changes. In the release note, focus on the impact on users, and mention the environment or conditions where the impact may occur.

    Refer to a related PR or issue link (optional)

    type/chore 
    opened by kwannoel 0
Releases (v0.1.15)
  • v0.1.15(Jan 4, 2023)

    For installation and running instructions, see Get started.

    Main changes

    Installation and deployment

    • Parallelism and available memory of compute nodes are now command-line arguments and removed from the configuration file. #6767
    • The default barrier interval is set to 1 second. #6553
    • Adds support for meta store backup and recovery. #6737

    SQL features

    • Adds support for SHOW CREATE MATERIALIZED VIEW and SHOW CREATE VIEW to show how materialized and non-materialized views are defined. #6921
    • Adds support for CREATE TABLE IF NOT EXISTS. #6643
    • A sink can be created from a SELECT query. #6648
    • Adds support for struct casting and comparison. #6552
    • Adds pg_catalog views and system functions. #6982
    • Adds support for CREATE TABLE AS. #6798
    • Adds initial support for batch queries on the Kafka source. #6474
    • Adds support for SET QUERY_EPOCH to query historical data based on meta backup. #6840
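
    As a hedged illustration of two of the statements above (object names are hypothetical):

    /* hypothetical names; statements listed in the release notes above */
    create table if not exists t_events (v1 int);
    show create materialized view mv1;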

    Connectors

    • Improves the handling of schema errors for Avro and Protobuf data. #6821
    • Adds two options to the datagen connector to make it possible to generate increasing timestamp values. #6591

    Observability

    • Adds metrics for the backup manager in Grafana. #6898
    • RisingWave Dashboard can now fetch data from Prometheus and visualize it in charts. #6602

    Full Changelog: https://github.com/risingwavelabs/risingwave/compare/v0.1.14...v0.1.15

    Source code(tar.gz)
    Source code(zip)
    risingwave-v0.1.15-x86_64-unknown-linux.tar.gz(436.26 MB)
  • v0.1.14(Dec 1, 2022)

    For installation and running instructions, see Get started.

    Main changes

    SQL features

    • PRIMARY KEY constraint checks can be performed on materialized sources and tables, but not on non-materialized sources. For tables or materialized sources with PRIMARY KEY constraints enabled, if you insert data with an existing key, the new data overwrites the old data. #6320 #6435
    • Adds support for timestamp with time zone data type. You can use this data type in time window functions, and convert between it and timestamp (without time zone). #5855 #5910 #5968
    • Adds support for UNION and UNION ALL operators. #6363 #6397
    • Implements the rank() function to support a different mode of Top-N queries. #6383
    • Adds support for logical views (CREATE VIEW). #6023
    • Adds the date_trunc() function. #6365
    • Adds the system catalog schema. #6227
    • Displays error messages when users enter conflicting or redundant command options. #5933
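
    A hedged illustration of the timestamp with time zone and UNION ALL support above (table names and literals are hypothetical):

    /* hypothetical tables t1 and t2, each with an int column v1 */
    select '2022-12-01 10:00:00+08:00'::timestamp with time zone;
    select v1 from t1 union all select v1 from t2;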

    Connectors

    • Adds support for the Maxwell Change Data Capture (CDC) format. #6057
    • Protobuf schema files can be loaded from Web locations in s3://, http://, or https:// formats. #6114 #5964
    • Adds support for Confluent Schema Registry for Kafka data in Avro and Protobuf formats. #6289
    • Adds two options to the Kinesis connector. Users can specify the startup mode and optionally the sequence number to start with. #6317

    Full Changelog: https://github.com/risingwavelabs/risingwave/compare/v0.1.13...v0.1.14

    Source code(tar.gz)
    Source code(zip)
    risingwave-v0.1.14-x86_64-unknown-linux.tar.gz(388.94 MB)
Owner
Singularity Data
Building the next-generation streaming database in the cloud.
A Modern Real-Time Data Processing & Analytics DBMS with Cloud-Native Architecture, built to make the Data Cloud easy

Datafuse Labs 5k Jan 9, 2023
A high-performance, distributed, schema-less, cloud native time-series database

CeresDB is a high-performance, distributed, schema-less, cloud native time-series database that can handle both time-series and analytics workloads.

null 1.8k Dec 30, 2022
The rust client for CeresDB. CeresDB is a high-performance, distributed, schema-less, cloud native time-series database that can handle both time-series and analytics workloads.

null 12 Nov 18, 2022
Rust client for Timeplus Proton, a fast and lightweight streaming SQL engine

Rust Client for Timeplus Proton Rust client for Timeplus Proton. Proton is a streaming SQL engine, a fast and lightweight alternative to Apache Flink,

Timeplus 4 Feb 27, 2024
🧰 The Rust SQL Toolkit. An async, pure Rust SQL crate featuring compile-time checked queries without a DSL. Supports PostgreSQL, MySQL, SQLite, and MSSQL.

SQLx 🧰 The Rust SQL Toolkit Install | Usage | Docs Built with ❤️ by The LaunchBadge team SQLx is an async, pure Rust† SQL crate featuring compile-tim

launchbadge 7.6k Dec 31, 2022
A Rust SQL query builder with a pleasant fluent API closely imitating actual SQL

Scooby An SQL query builder with a pleasant fluent API closely imitating actual SQL. Meant to comfortably build dynamic queries with a little bit of s

Aleksei Voronov 100 Nov 11, 2022
Gh-sql - Query GitHub Projects (beta) with SQL

gh-sql: Query GitHub Projects (beta) with SQL Installation gh extension install KOBA789/gh-sql Features SELECT items DELETE items UPDATE item fields

Hidekazu Kobayashi 108 Dec 7, 2022
SQL validator tool for BigQuery standard SQL.

bqvalid What bqvalid does bqvalid is the SQL validator tool for BigQuery standard SQL. bqvalid fails with error message if there's the expression that

null 10 Dec 25, 2022
tectonicdb is a fast, highly compressed standalone database and streaming protocol for order book ticks.

tectonicdb crate docs.rs crate.io tectonicdb tdb-core tdb-server-core tdb-cli tectonicdb is a fast, highly compressed standalone database and streamin

Ricky Han 525 Dec 23, 2022
FeOphant - A SQL database server written in Rust and inspired by PostgreSQL.

A PostgreSQL inspired SQL database written in Rust.

Christopher Hotchkiss 27 Dec 7, 2022
GlueSQL is a SQL database library written in Rust

GlueSQL is a SQL database library written in Rust. It provides a parser (sqlparser-rs), execution layer, and optional storage (sled) packaged into a single library.

GlueSQL 2.1k Jan 8, 2023
Distributed SQL database in Rust, written as a learning project

toyDB Distributed SQL database in Rust, written as a learning project. Most components are built from scratch, including: Raft-based distributed conse

Erik Grinaker 4.6k Jan 8, 2023
Distributed, version controlled, SQL database with cryptographically verifiable storage, queries and results. Think git for postgres.

SDB - SignatureDB Distributed, version controlled, SQL database with cryptographically verifiable storage, queries and results. Think git for postgres

Fremantle Industries 5 Apr 26, 2022
SQL database to read and write "discord"

GlueSQL Discord Storage After discussing how CI testing will be managed, we plan to move it upstream. Precautions for use discord ToS https://discord.

Jiseok CHOI 9 Feb 28, 2023
ReefDB is a minimalistic, in-memory and on-disk database management system written in Rust, implementing basic SQL query capabilities and full-text search.

ReefDB ReefDB is a minimalistic, in-memory and on-disk database management system written in Rust, implementing basic SQL query capabilities and full-

Sacha Arbonel 75 Jun 12, 2023
A small rust database that uses json in memory.

Rust Small Database (RSDB) RSDB is a small library for creating a query-able database that is encoded with json. The library is well tested (~96.30% c

Kace Cottam 2 Jan 4, 2022
A Toy Query Engine & SQL interface

Naive Query Engine (Toy for Learning) This is a query engine which supports a SQL interface, and it is only a toy for learning query engines. You can

谭巍 45 Dec 21, 2022
influxdb provides an asynchronous Rust interface to an InfluxDB database.

influxdb influxdb provides an asynchronous Rust interface to an InfluxDB database. This crate supports insertion of strings already in the InfluxDB Li

null 9 Feb 16, 2021
An object-relational in-memory cache, supports queries with an SQL-like query language.

qlcache An object-relational in-memory cache, supports queries with an SQL-like query language. Warning This is a rather low-level library, and only p

null 3 Nov 14, 2021