A Modern Real-Time Data Processing & Analytics DBMS with Cloud-Native Architecture, written in Rust

Overview

Datafuse


Datafuse is a real-time data processing & analytics DBMS with a cloud-native architecture, written in Rust. Inspired by ClickHouse and powered by arrow-rs, it is built to make it easy to power the Data Cloud.


Principles

  • Fearless

    • No data races, No unsafe, Minimize unhandled errors
  • High Performance

    • Everything is Parallelism
  • High Scalability

    • Everything is Distributed
  • High Reliability

    • Datafuse's primary design goal is reliability
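
A minimal sketch of what "Fearless" can look like in practice (our illustration, not project code): forbid unsafe code at the crate level and return errors to the caller instead of panicking.

    #![forbid(unsafe_code)] // the compiler rejects any `unsafe` block in this crate

    use std::num::ParseIntError;

    // Propagate errors instead of calling unwrap() and panicking.
    fn parse_port(s: &str) -> Result<u16, ParseIntError> {
        s.trim().parse::<u16>()
    }

    fn main() {
        match parse_port("3307") {
            Ok(p) => println!("listening on port {p}"),
            Err(e) => eprintln!("invalid port: {e}"),
        }
    }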

Architecture


Performance

  • In-memory SIMD vector processing performance only
  • Dataset: 100,000,000,000 rows (100 billion)
  • Hardware: AMD Ryzen 7 PRO 4750U, 8 CPU cores, 16 threads
  • Rust: rustc 1.55.0-nightly (868c702d0 2021-06-30)
  • Built with link-time optimization and CPU-specific instructions
  • ClickHouse server version 21.4.6 revision 54447
| Query | FuseQuery (v0.4.40-nightly) | ClickHouse (v21.4.6) |
|-------|-----------------------------|----------------------|
| SELECT avg(number) FROM numbers_mt(100000000000) | 4.35 s (22.97 billion rows/s, 183.91 GB/s) | ×1.4 slow, 6.04 s (16.57 billion rows/s, 132.52 GB/s) |
| SELECT sum(number) FROM numbers_mt(100000000000) | 4.20 s (23.79 billion rows/s, 190.50 GB/s) | ×1.4 slow, 5.90 s (16.95 billion rows/s, 135.62 GB/s) |
| SELECT min(number) FROM numbers_mt(100000000000) | 4.92 s (20.31 billion rows/s, 162.64 GB/s) | ×2.7 slow, 13.05 s (7.66 billion rows/s, 61.26 GB/s) |
| SELECT max(number) FROM numbers_mt(100000000000) | 4.77 s (20.95 billion rows/s, 167.78 GB/s) | ×3.0 slow, 14.07 s (7.11 billion rows/s, 56.86 GB/s) |
| SELECT count(number) FROM numbers_mt(100000000000) | 2.91 s (34.33 billion rows/s, 274.90 GB/s) | ×1.3 slow, 3.71 s (26.93 billion rows/s, 215.43 GB/s) |
| SELECT sum(number+number+number) FROM numbers_mt(100000000000) | 19.83 s (5.04 billion rows/s, 40.37 GB/s) | ×12.1 slow, 233.71 s (427.87 million rows/s, 3.42 GB/s) |
| SELECT sum(number) / count(number) FROM numbers_mt(100000000000) | 3.90 s (25.62 billion rows/s, 205.13 GB/s) | ×2.5 slow, 9.70 s (10.31 billion rows/s, 82.52 GB/s) |
| SELECT sum(number) / count(number), max(number), min(number) FROM numbers_mt(100000000000) | 8.28 s (12.07 billion rows/s, 96.66 GB/s) | ×4.0 slow, 32.87 s (3.04 billion rows/s, 24.34 GB/s) |
| SELECT number FROM numbers_mt(10000000000) ORDER BY number DESC LIMIT 100 | 4.80 s (2.08 billion rows/s, 16.67 GB/s) | ×2.9 slow, 13.95 s (716.62 million rows/s, 5.73 GB/s) |
| SELECT max(number), sum(number) FROM numbers_mt(1000000000) GROUP BY sipHash64(number % 3), sipHash64(number % 4) | 14.84 s (67.38 million rows/s, 539.51 MB/s) | ×1.5 fast, 10.24 s (97.65 million rows/s, 781.23 MB/s) |

Note:

  • ClickHouse's system.numbers_mt uses 16-way parallel processing (gist)
  • FuseQuery's system.numbers_mt uses 16-way parallel processing (gist)
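
The benchmark's 16-way parallelism amounts to each worker thread scanning a disjoint slice of the number range and merging the partial aggregates at the end. A simplified sketch of that pattern (our illustration, not FuseQuery's actual pipeline):

    use std::thread;

    // Sum 0..total by splitting the range across `workers` threads, then
    // combining the per-thread partial sums.
    fn parallel_sum(total: u64, workers: u64) -> u128 {
        let chunk = total / workers;
        let handles: Vec<_> = (0..workers)
            .map(|w| {
                let start = w * chunk;
                let end = if w == workers - 1 { total } else { start + chunk };
                thread::spawn(move || (start..end).map(u128::from).sum::<u128>())
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    }

    fn main() {
        // A scaled-down analogue of `SELECT sum(number) FROM numbers_mt(...)`.
        println!("{}", parallel_sum(100_000_000, 16));
    }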

Status

General

  • SQL Parser
  • Query Planner
  • Query Optimizer
  • Predicate Push Down
  • Limit Push Down
  • Projection Push Down
  • Type coercion
  • Parallel Query Execution
  • Distributed Query Execution
  • Shuffle Hash GroupBy
  • Merge-Sort OrderBy
  • Joins (WIP)

SQL Support

  • Projection
  • Filter (WHERE)
  • Limit
  • Aggregate Functions
  • Scalar Functions
  • UDFs (user-defined functions)
  • SubQueries
  • Sorting
  • Joins (WIP)
  • Window (TODO)

Getting Started
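
A minimal client sketch, assuming a locally running fuse-query server that exposes a MySQL-compatible handler on port 3307 (the address, port, and use of the mysql crate here are assumptions, not documented guarantees):

    use mysql::prelude::*;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Assumed endpoint: a local fuse-query MySQL-compatible handler.
        let pool = mysql::Pool::new("mysql://root@127.0.0.1:3307/default")?;
        let mut conn = pool.get_conn()?;

        // A small variant of the benchmark queries above.
        let avg: Option<f64> = conn.query_first("SELECT avg(number) FROM numbers_mt(10000000)")?;
        println!("avg = {avg:?}");
        Ok(())
    }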

Roadmap

Datafuse is currently in Alpha and is not ready for production use. See Roadmap 2021.

Contributing

License

Datafuse is licensed under Apache 2.0.

Comments
  • [store] refactor: rename store to dfs

    [store] refactor: rename store to dfs

    I hereby agree to the terms of the CLA available at: https://databend.rs/policies/cla/

    Summary

    [store] refactor: rename store to dfs

    Changelog

    • Improvement

    Related Issues

    opened by drmingdrmer 387
  • Rename trait type names from I$Name to $Name

    Rename trait type names from I$Name to $Name

    I hereby agree to the terms of the CLA available at: https://datafuse.rs/policies/cla/

    Summary

    Rename trait type names from I$Name to $Name

    Changelog

    • Renames :

      • ITable to Table
      • IDatabase to Database
    • Removes IDataSource, use struct DataSource directly

    • And relevant code

    Related Issues

    Fixes #727

    Test Plan

    No extra ut/stateless_test

    opened by dantengsky 127
  • add toStartOfWeek

    add toStartOfWeek

    I hereby agree to the terms of the CLA available at: https://databend.rs/policies/cla/

    Summary

    Summary about this PR

    Changelog

    • Improvement

    Related Issues

    #853

    Test Plan

    Unit Tests ok

    Stateless Tests ok

    opened by dust1 99
  • ISSUE-2039: rm param

    ISSUE-2039: rm param "database" from get_table_by_id

    I hereby agree to the terms of the CLA available at: https://databend.rs/policies/cla/

    Summary

    • signature of Catalog::get_table_by_id changed to
        fn get_table_by_id(
            &self,
            table_id: MetaId,
            table_version: Option<MetaVersion>,
        ) -> Result<Arc<TableMeta>>;
    

    the "database_name" parameter has been removed.

    NOTE:

    1. added database_name and table_name to struct common/meta/types/Table
    2. added the annotation #[allow(clippy::large_enum_variant)] to pub enum AppliedState to silence the warning that a variant is 400 bytes

    Changelog

    • Improvement
    • Not for changelog (changelog entry is not required)

    Related Issues

    Fixes #2039

    Test Plan

    Unit Tests

    Stateless Tests

    opened by dantengsky 70
  • [query/server/http] Add /v1/query.

    [query/server/http] Add /v1/query.

    I hereby agree to the terms of the CLA available at: https://databend.rs/policies/cla/

    Summary

    Add a new endpoint /v1/query to support SQL.

    handler usage

    1. Post a query as JSON in the HttpQueryRequest format; the response is JSON in the QueryResponse format.
    2. Pagination with long polling.
    3. Result data is embedded in QueryResponse.
    4. Returns the following 4 URIs; the client should rely only on the endpoint /v1/query and the URIs returned in the response, since the other endpoints may change without notice.
      1. next: get the next page (together with progress) with long polling. Page N is released when page N+1 is requested.
      2. state: returns a struct like QueryResponse, but with the data field empty. Short polling.
      3. kill: kills the running query.
      4. delete: kills the running query and deletes it.

    test_async can serve as a demo of their usage.

    pub struct HttpQueryRequest {
        pub sql: String,
    }
    
    pub struct QueryResponse {
        pub id: Option<String>,
        pub columns: Option<DataSchemaRef>,
        pub data: JsonBlockRef,
        pub next_uri: Option<String>,
        pub state_uri: Option<String>,
        pub kill_uri: Option<String>,
        pub delete_uri: Option<String>,
        pub query_state: Option<String>,
        pub request_error: Option<String>,
        pub query_error: Option<String>,
        pub query_progress: Option<ProgressValues>,
    }
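
    A hypothetical client sketch of the flow described above (the address and port are assumptions; the JSON field names follow the structs in this PR):

    use serde_json::{json, Value};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = reqwest::blocking::Client::new();
        let base = "http://127.0.0.1:8000"; // assumed local query server address

        // POST the query (HttpQueryRequest); the first QueryResponse page comes back.
        let mut resp: Value = client
            .post(format!("{base}/v1/query"))
            .json(&json!({ "sql": "SELECT 1" }))
            .send()?
            .json()?;

        // Long-poll: keep fetching next_uri until the server stops returning one.
        loop {
            println!("data: {}", resp["data"]);
            let next = match resp["next_uri"].as_str() {
                Some(u) => u.to_owned(),
                None => break,
            };
            resp = client.get(format!("{base}{next}")).send()?.json()?;
        }
        Ok(())
    }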
    
    

    internal

    1. The web layer (query_handlers.rs) is separated from a more structured internal implementation in mod v1/query.
      1. /v1/statement is kept for posting raw SQL and may reuse the internal implementation; this will be done alongside other changes in another PR.
      2. It may even be reused to support gRPC later if needed.
    2. Make sure the internal tokio task stops quickly when the query is killed by SQL command or HTTP request.

    TODO

    Soon (new PR in a week or so):

    1. The classification and organization of returned errors will be polished.
    2. More tests.
    3. Rewrite /v1/statements.

    Long term:

    1. Add client session support.
    2. Adjust the handler interface to a stable version with formal docs, covering the 2 JSON formats and the optional URL parameters and headers; only the necessary fields are added currently.
    3. Better control of memory.

    Changelog

    • New Feature

    Related Issues

    Fixes #2604

    Test Plan

    Unit Tests

    pr-feature community-take 
    opened by youngsofun 63
  • [ci] fix gcov install failed

    [ci] fix gcov install failed

    Signed-off-by: Chojan Shang [email protected]

    I hereby agree to the terms of the CLA available at: https://datafuse.rs/policies/cla/

    Summary

    remove ~/.cargo/bin/ from cache

    Changelog

    • Build/Testing/CI
    • Not for changelog (changelog entry is not required)

    Related Issues

    Fixes #1012

    Test Plan

    No

    pr-build 
    opened by PsiACE 63
  • Implements Feature 630

    Implements Feature 630

    Summary

    It's a baby step of integrating Store with Query, which implements

    • update metadata after appending data parts to the table
    • remote table read_plan
    • remote table read

    Basic statements like insert into ... and select ... from ... can be executed now (and lots of interesting things are left to do).

    Changelog

    • Store: implementations for ITable read_plan and read a5c42b2e5d14d042f3c3d928a35c625ca32f4410

    • Query: implements RemoteTable's read_plan & read b55eacf912bc7985765b870ecd658d505eb75a56

    • Adds remote flag to ReadDataSourcePlan deaea8ea29b4a6d4afb1390cdd3e0d3540b2597c

    • Tweaks stateless test cases ed69c92fc37c01650f17478f4d6e446f828f74ad

    The following points might be worth your attention:

    • Remove trait bound Sync from SendableDataBlockStream ed69c92fc37c01650f17478f4d6e446f828f74ad

      Turns out, at least for now, we do not need this trait bound, and without the Sync constraint, SendableDataBlockStream is more stream-combinator friendly.

    • Keep ITable::read_plan as a non-async method

      By using the runtime of ctx (and a channel). IMHO, changing ITable::read_plan to an async fn may be too harsh at this stage.

    • Add an extra flag to ReadDataSourcePlan and SourceTransform

      So that we can tell when we are operating on a remote table (and fetch the remote table accordingly). It is a temporary workaround; let's postpone it until the Catalog API is ready. SourceTransform::execute and FuseQueryContext are tweaked accordingly; please see deaea8ea29b4a6d4afb1390cdd3e0d3540b2597c

    Related Issues

    resolves #630

    Test Plan

    • UT & Stateless Tests

    Progress

    • [x] Update meta
    • [x] Flight Service
    • [x] Store Client
    • [x] Remote table - read_plan
    • [x] Remote table - read (read partition)
    • [x] Unit tests & integration tests
    • [x] Multi-Node integration tests
    • [x] Code GC
    • [x] Squash commits
    opened by dantengsky 63
  • [query] refactor: Table::read_plan does not need DatabendQueryContext anymore.

    [query] refactor: Table::read_plan does not need DatabendQueryContext anymore.

    I hereby agree to the terms of the CLA available at: https://databend.rs/policies/cla/

    Summary

    [query] refactor: Table::read_plan does not need DatabendQueryContext anymore.

    Why: Table is a low-level concept in the databend code base and should be a dependency of other crates such as common/planner.

    This commit is one of the steps toward removing Table's dependency on the query crate.

    Trait Table references DatabendQueryContext as an argument type. In this commit it is replaced with a smaller type, TableIOContext, so that in the future Table can be moved out of the query crate.

    TableIOContext provides data-access support, exposes the runtime used by the query itself, and provides two resource values: the max thread number and the node list.

    • Add TableIOContext to provide everything a Table needs to build a plan (sketched below).

    • Table::read_plan() takes a TableIOContext argument in place of DatabendQueryContext.

    • When calling read_plan(), a temporary TableIOContext is built out of DatabendQueryContext.

    • DatabendQueryContext provides two additional supporting methods: get_single_node_table_io_context() and get_cluster_table_io_context().

    • fix: #2072
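
    A rough sketch of the resulting shape (struct bodies are invented for illustration; only the names mentioned in this PR are real):

    use std::sync::Arc;

    // The smaller context: data-access support plus two resource values.
    pub struct TableIOContext {
        pub max_threads: usize, // max thread number
        pub nodes: Vec<String>, // node list
    }

    pub struct ReadDataSourcePlan; // stand-in for the real plan type

    pub trait Table {
        // read_plan no longer needs the full DatabendQueryContext.
        fn read_plan(&self, io_ctx: Arc<TableIOContext>) -> ReadDataSourcePlan;
    }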

    Changelog

    • Improvement

    Related Issues

    • #2046
    • #2059
    opened by drmingdrmer 52
  • Cast functions between String and Date16/Date32/DateTime32

    Cast functions between String and Date16/Date32/DateTime32

    I hereby agree to the terms of the CLA available at: https://databend.rs/policies/cla/

    Summary

    Summary about this PR

    Changelog

    • New Feature

    Related Issues

    Related #853

    Test Plan

    Unit Tests

    Stateless Tests

    pr-feature 
    opened by sundy-li 50
  • ISSUE-1639: Remove session_api.rs

    ISSUE-1639: Remove session_api.rs

    I hereby agree to the terms of the CLA available at: https://datafuse.rs/policies/cla/

    Summary

    Remove common/store-api/session_api.rs

    Changelog

    • Improvement

    Related Issues

    Fixes #1639

    Test Plan

    Unit Tests

    Stateless Tests

    opened by jyz0309 39
  • Consider renaming project. DataFuse is too similar to DataFusion.

    Consider renaming project. DataFuse is too similar to DataFusion.

    This project appears to have similar goals to Apache Arrow DataFusion, contains code from DataFusion, and has a very similar name.

    The names "DataFuse" and "DataFusion" only differ by a few characters and this could cause confusion about the relationship between these projects.

    On behalf of the Apache Arrow DataFusion community, who have put a lot of work into building the DataFusion software and brand over the past three years, I respectfully ask that you consider renaming this project.

    opened by andygrove 35
  • feat(meta-rewrite): meta data upgrade program: metarewrite

    feat(meta-rewrite): meta data upgrade program: metarewrite

    I hereby agree to the terms of the CLA available at: https://databend.rs/dev/policies/cla/

    Summary

    feat(meta-rewrite): meta data upgrade program: metarewrite

    What it does:

    This program loads meta-service data from a raft-dir, upgrades TableMeta to the new version, and writes it back.

    Both raft-log data and state-machine data will be converted.

    Usage:

    • Implement the conversion function: conv_v1_to_v2() in this file.
    • Shut down databend-meta
    • Backup the data: https://databend.rs/doc/deploy/metasrv/metasrv-backup-restore
    • Build it with cargo build --bin databend-metarewrite and run it:
    databend-metarewrite --raft-dir "<./your/raft-dir/>"
    

    Status:

    This program is an upgrade-program framework: data is read from the metadata dir, but the conversion function is left as a todo!().

    databend-metarewrite is a separate crate, to avoid introducing unnecessary dependencies into other binaries such as databend-query and databend-meta.

    How it works:

    This program only contains the latest data format. To access an old data format, it simply loads an old version of databend as a crate.
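
    A hypothetical sketch of the conversion hook (the V1/V2 type names are stand-ins):

    // Old and new metadata layouts; fields elided for illustration.
    struct TableMetaV1;
    struct TableMetaV2;

    // The hook the framework calls for each record read from the raft-dir;
    // the shipped program leaves the body as todo!().
    fn conv_v1_to_v2(_old: TableMetaV1) -> TableMetaV2 {
        todo!("implement the field-by-field upgrade")
    }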

    Changelog

    Related Issues

    pr-feature 
    opened by drmingdrmer 3
  • feat(storage): support nested type in read_parquet.

    feat(storage): support nested type in read_parquet.

    I hereby agree to the terms of the CLA available at: https://databend.rs/dev/policies/cla/

    Summary

    • Don't push down filters to parquet reading if the field is a nested type (arrow2 doesn't support constructing an index page reader on nested types).
    • Fix schema mismatch problem.
    • Fix panic when reading complex nested types (such as Array(Tuple)).

    Try to fix #9417.

    Note

    • Getting fields of complex nested types cannot succeed yet (see the output below); this will be fixed in a later PR.
    > desc ttt;
    +-------+-----------------------+------+---------+-------+
    | Field | Type                  | Null | Default | Extra |
    +-------+-----------------------+------+---------+-------+
    | t     | ARRAY((INT32, INT32)) | NO   | []      |       |
    +-------+-----------------------+------+---------+-------+
    1 row in set (0.050 sec)
    
    > select t[0]:a from ttt;
    ERROR 1105 (HY000): Code: 1065, displayText = no overload satisfies `get((Int32, Int32) NULL, String)`
    
    has tried possible overloads:
      get(Array(Nothing) NULL, UInt64 NULL) :: NULL     : unable to unify `(Int32, Int32)` with `Array(Nothing)`
      get(Array(T0 NULL), UInt64) :: T0 NULL            : unable to unify `(Int32, Int32) NULL` with `Array(T0 NULL)`
      get(Array(T0 NULL) NULL, UInt64 NULL) :: T0 NULL  : unable to unify `(Int32, Int32)` with `Array(T0 NULL)`
      get(Array(T0), UInt64) :: T0 NULL                 : unable to u
    
    pr-feature 
    opened by RinChanNOWWW 1
  • Reserved keyword list is incorrect in information_schema

    Reserved keyword list is incorrect in information_schema

    Not all keywords are reserved. You can query this via token.is_reserved_ident(after_as: bool) and token.is_reserved_function_name(after_as: bool). Passing after_as = true makes some reserved keywords unreserved, because such keywords are unambiguous after an AS.

    Originally posted by @andylokandy in https://github.com/datafuselabs/databend/pull/9407#discussion_r1059760253

    opened by andylokandy 0