Rust high-performance compile-time ORM (rbson based)

Overview

WebSite | 简体中文 | Showcase | 案例


A highly performant, safe, dynamic-SQL (compile-time) ORM framework written in Rust, inspired by MyBatis and MyBatis-Plus.

Why not diesel or sqlx?

| Framework | Async/.await | Learning curve | Dynamic SQL/py/Wrapper/built-in CRUD | Logical delete plugin | Pagination plugin |
|-----------|--------------|----------------|--------------------------------------|-----------------------|-------------------|
| rbatis    | √            | easy           | √                                    | √                     | √                 |
| sqlx      | √            | hard (depends on macros and env. variables) | x                   | x                     | x                 |
| diesel    | x            | hard (uses FFI, unsafe)                     | x                   | x                     | x                 |
Performance comparison with Golang (in a Docker environment)

| Framework         | MySQL (docker)   | SQL statement (10k)         | ns/operation (lower is better) | QPS (higher is better) | Memory usage (lower is better) |
|-------------------|------------------|-----------------------------|--------------------------------|------------------------|--------------------------------|
| Rust-rbatis/tokio | 1 CPU, 1G memory | select count(1) from table; | 965649 ns/op                   | 1035 QPS               | 2.1MB                          |
| Go-GoMybatis/http | 1 CPU, 1G memory | select count(1) from table; | 1184503 ns/op                  | 844 QPS                | 28.4MB                         |
  • No runtime, no garbage collection
  • Zero-cost dynamic SQL, implemented with proc-macros at compile time plus Cow (to reduce unnecessary cloning); no OGNL engine is needed (unlike MyBatis)
  • Flexible deserialization: automatically deserializes to any struct (Option, Map, Vec...)
  • High performance, based on Future with async_std/tokio; a single-threaded benchmark can easily reach 200,000 QPS
  • Logical deletes, pagination, py-like SQL, and basic MyBatis functionality
  • Supports logging, customizable via the log crate
  • 100% safe Rust with #![forbid(unsafe_code)] enabled
  • rbatis/example (import into CLion!)
  • abs_admin project: a complete back-office user management system (Vue.js + rbatis + actix-web)

Supported data structures

The following data structures are supported:
Option
Vec
HashMap
i32, i64, f32, f64, bool, String... and more Rust types
rbatis::Bytes
rbatis::DateNative
rbatis::DateUtc
rbatis::DateTimeNative
rbatis::DateTimeUtc
rbatis::Decimal
rbatis::Json
rbatis::TimeNative
rbatis::TimeUtc
rbatis::Timestamp
rbatis::TimestampZ
rbatis::Uuid
rbatis::plugin::page::{Page, PageRequest}
rbson::Bson*
serde_json::*
any serde type
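Because every field in the example structs below is wrapped in Option, a NULL column deserializes to None instead of failing. A minimal std-only sketch of that idea (illustrative only, not rbatis code):

```rust
// Illustrative sketch (plain std, no rbatis): a NULL database cell maps
// naturally onto Option::None, so struct fields can all be Options.
fn cell_to_i32(raw: Option<&str>) -> Option<i32> {
    // None stays None; a non-numeric cell also becomes None instead of panicking.
    raw.and_then(|s| s.parse::<i32>().ok())
}

fn main() {
    assert_eq!(cell_to_i32(Some("42")), Some(42));
    assert_eq!(cell_to_i32(None), None);
}
```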

Supported databases

The following databases are supported:
Mysql
Postgres
Sqlite
Mssql/Sqlserver (50%)
MariaDB(Mysql)
TiDB(Mysql)
CockroachDB(Postgres)

Supported OS/Platforms

The following platforms are supported:
Linux
Apple/MacOS
Windows

Supported Web Frameworks

Quick example: QueryWrapper and common usages (see example/crud_test.rs for details)
  • Cargo.toml
# add these dependencies to Cargo.toml, then run cargo build

# bson (required)
serde = { version = "1", features = ["derive"] }
rbson = "2.0"

# logging lib(required)
log = "0.4"
fast_log="1.3"

# rbatis (required) default is all-database+runtime-async-std-rustls
rbatis =  { version = "3.0" } 
# also if you use actix-web+mysql
# rbatis = { version = "3.0", default-features = false, features = ["mysql","runtime-async-std-rustls"] }
// declare #[macro_use] in the crate root ('main.rs' or 'lib.rs') or in 'mod.rs'
#[macro_use]
extern crate rbatis;

use rbatis::crud::CRUD;

/// may also write `CRUDTable` as `impl CRUDTable for BizActivity{}`
/// #[crud_table]
/// #[crud_table(table_name:biz_activity)]
/// #[crud_table(table_name:"biz_activity"|table_columns:"id,name,version,delete_flag")]
/// #[crud_table(table_name:"biz_activity"|table_columns:"id,name,version,delete_flag"|formats_pg:"id:{}::uuid")]
#[crud_table]
#[derive(Clone, Debug)]
pub struct BizActivity {
  pub id: Option<String>,
  pub name: Option<String>,
  pub pc_link: Option<String>,
  pub h5_link: Option<String>,
  pub pc_banner_img: Option<String>,
  pub h5_banner_img: Option<String>,
  pub sort: Option<String>,
  pub status: Option<i32>,
  pub remark: Option<String>,
  pub create_time: Option<rbatis::DateTimeNative>,
  pub version: Option<i32>,
  pub delete_flag: Option<i32>,
}

// this macro generates: impl BizActivity { pub fn id() -> &'static str ... }
impl_field_name_method!(BizActivity{id,name});

/// (optional) implement manually instead of using `derive(CRUDTable)`. This allows overriding the `table_name()` function and enables code completion in the IDE.
/// (optional) such a struct then requires #[derive(Serialize, Deserialize)]
// use rbatis::crud::CRUDTable;
//impl CRUDTable for BizActivity { 
//    fn table_name()->String{
//        "biz_activity".to_string()
//    }
//    fn table_columns()->String{
//        "id,name,delete_flag".to_string()
//    }
//}
#[tokio::main]
async fn main() {
  // enable the log crate to show SQL logs
  fast_log::init(fast_log::config::Config::new().console());
  // initialize rbatis. You may use the `lazy_static` crate to define rbatis as a global variable, because rbatis is thread safe
  let rb = Rbatis::new();
  // connect to the database
  rb.link("mysql://root:123456@localhost:3306/test").await.unwrap();
  // customize connection pool parameters (optional)
// let mut opt =PoolOptions::new();
// opt.max_size=100;
// rb.link_opt("mysql://root:123456@localhost:3306/test",&opt).await.unwrap();
  // build the wrapper's SQL logic
  let wrapper = rb.new_wrapper()
          .eq("id", 1)                    //sql:  id = 1
          .and()                          //sql:  and 
          .ne(BizActivity::id(), 1)       //sql:  id <> 1
          .in_array("id", &[1, 2, 3])     //sql:  id in (1,2,3)
          .not_in("id", &[1, 2, 3])       //sql:  id not in (1,2,3)
          .like("name", 1)                //sql:  name like 1
          .or()                           //sql:  or
          .not_like(BizActivity::name(), "asdf")       //sql:  name not like 'asdf'
          .between("create_time", "2020-01-01 00:00:00", "2020-12-12 00:00:00")//sql:  create_time between '2020-01-01 00:00:00' and '2020-12-12 00:00:00'
          .group_by(&["id"])              //sql:  group by id
          .order_by(true, &["id", "name"])//sql:  order by id,name
          ;

  let activity = BizActivity {
    id: Some("12312".to_string()),
    name: None,
    pc_link: None,
    h5_link: None,
    pc_banner_img: None,
    h5_banner_img: None,
    sort: None,
    status: None,
    remark: None,
    create_time: Some(rbatis::DateTimeNative::now()),
    version: Some(1),
    delete_flag: Some(1),
  };
  // save
  rb.save(&activity, &[]).await;
//Exec ==> INSERT INTO biz_activity (create_time,delete_flag,h5_banner_img,h5_link,id,name,pc_banner_img,pc_link,remark,sort,status,version) VALUES ( ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? )

  // batch save
  rb.save_batch(&vec![activity], &[]).await;
//Exec ==> INSERT INTO biz_activity (create_time,delete_flag,h5_banner_img,h5_link,id,name,pc_banner_img,pc_link,remark,sort,status,version) VALUES ( ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? ),( ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? , ? )

  // fetch returns None or one result. For the column you can use BizActivity::id() or "id"
  let result: Option<BizActivity> = rb.fetch_by_column(BizActivity::id(), "1").await.unwrap();
//Query ==> SELECT create_time,delete_flag,h5_banner_img,h5_link,id,name,pc_banner_img,pc_link,remark,sort,status,version  FROM biz_activity WHERE delete_flag = 1  AND id =  ? 

  // query all
  let result: Vec<BizActivity> = rb.list().await.unwrap();
//Query ==> SELECT create_time,delete_flag,h5_banner_img,h5_link,id,name,pc_banner_img,pc_link,remark,sort,status,version  FROM biz_activity WHERE delete_flag = 1

  // query by a vec of ids
  let result: Vec<BizActivity> = rb.list_by_column("id", &["1"]).await.unwrap();
//Query ==> SELECT create_time,delete_flag,h5_banner_img,h5_link,id,name,pc_banner_img,pc_link,remark,sort,status,version  FROM biz_activity WHERE delete_flag = 1  AND id IN  (?) 

  // query by wrapper
  let r: Result<Option<BizActivity>, Error> = rb.fetch_by_wrapper(rb.new_wrapper().eq("id", "1")).await;
//Query ==> SELECT  create_time,delete_flag,h5_banner_img,h5_link,id,name,pc_banner_img,pc_link,remark,sort,status,version  FROM biz_activity WHERE delete_flag = 1  AND id =  ? 

  // delete
  rb.remove_by_column::<BizActivity, _>("id", &"1").await;
//Exec ==> UPDATE biz_activity SET delete_flag = 0 WHERE id = 1

  // batch delete
  rb.remove_batch_by_column::<BizActivity, _>("id", &["1", "2"]).await;
//Exec ==> UPDATE biz_activity SET delete_flag = 0 WHERE id IN (  ?  ,  ?  ) 

  // update
  let mut activity = activity.clone();
  let r = rb.update_by_column("id", &activity).await;
//Exec   ==> update biz_activity set  status = ?, create_time = ?, version = ?, delete_flag = ?  where id = ?
  rb.update_by_wrapper(&activity, rb.new_wrapper().eq("id", "12312"), &[Skip::Value(&serde_json::Value::Null), Skip::Column("id")]).await;
//Exec ==> UPDATE biz_activity SET  create_time =  ? , delete_flag =  ? , status =  ? , version =  ?  WHERE id =  ? 
}

// ...more usage: see crud.rs
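The wrapper chain shown earlier composes SQL fragments and a parameter list call by call. A rough std-only sketch of the idea (this is not rbatis's actual implementation, just an illustration of the builder pattern it exposes):

```rust
// Rough sketch of the wrapper idea (not rbatis internals): each call appends
// a SQL fragment and records its argument, returning self for chaining.
struct Wrapper {
    sql: String,
    args: Vec<String>,
}

impl Wrapper {
    fn new() -> Self {
        Wrapper { sql: String::new(), args: Vec::new() }
    }
    fn eq(mut self, column: &str, val: impl ToString) -> Self {
        self.sql.push_str(&format!("{} = ?", column));
        self.args.push(val.to_string());
        self
    }
    fn and(mut self) -> Self {
        self.sql.push_str(" and ");
        self
    }
    fn ne(mut self, column: &str, val: impl ToString) -> Self {
        self.sql.push_str(&format!("{} <> ?", column));
        self.args.push(val.to_string());
        self
    }
}

fn main() {
    let w = Wrapper::new().eq("id", 1).and().ne("status", 0);
    assert_eq!(w.sql, "id = ? and status <> ?");
    assert_eq!(w.args, vec!["1".to_string(), "0".to_string()]);
}
```

Because each method takes `self` by value and returns it, an unfinished chain cannot be accidentally reused, which is the same ergonomic property the real `new_wrapper()` chain relies on.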

macros (new addition)

  • Important update: py_sql no longer has a runtime and compiles directly to static Rust code. This means SQL generated with py_sql/html_sql performs roughly the same as handwritten code.

Because expansion happens at compile time, the annotation needs to declare the database type that will be used.

    #[py_sql("select * from biz_activity where delete_flag = 0
                  if name != '':
                    and name=#{name}")]
    async fn py_sql_tx(rb: &Rbatis, tx_id: &String, name: &str) -> Vec<BizActivity> { impled!() }
  • Added html_sql support, organized similarly to MyBatis, to ease migrating Java systems to Rust. (Note that it is also compiled to Rust code at build time and performs close to handwritten code, i.e. very fast.)

Because expansion happens at compile time, the annotation needs to declare the database type that will be used.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "https://github.com/rbatis/rbatis_sql/raw/main/mybatis-3-mapper.dtd">
<mapper>
    <select id="select_by_condition">
        select * from biz_activity where
        <if test="name != ''">
            name like #{name}
        </if>
    </select>
</mapper>
    /// a paged select must have a '?: &PageRequest' arg and return 'Page<?>'
    #[html_sql("example/example.html")]
    async fn select_by_condition(rb: &mut RbatisExecutor<'_,'_>, page_req: &PageRequest, name: &str) -> Page<BizActivity> { impled!() }
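Conceptually, an `if` node in py_sql or html_sql compiles down to a plain Rust branch that appends a SQL fragment and its argument. A minimal std-only sketch of that idea (function name and the `1 = 1` guard are illustrative assumptions; the real generated code differs):

```rust
// Sketch (hypothetical, not the real generated code): the <if test="name != ''">
// node becomes an ordinary runtime branch, with no template engine left over.
fn select_by_condition_sql(name: &str) -> (String, Vec<String>) {
    // "1 = 1" keeps the WHERE clause valid when the branch is not taken.
    let mut sql = String::from("select * from biz_activity where 1 = 1");
    let mut args = Vec::new();
    if !name.is_empty() {
        sql.push_str(" and name like ?");
        args.push(name.to_string());
    }
    (sql, args)
}

fn main() {
    let (sql, args) = select_by_condition_sql("a%");
    assert_eq!(sql, "select * from biz_activity where 1 = 1 and name like ?");
    assert_eq!(args, vec!["a%".to_string()]);

    let (sql2, args2) = select_by_condition_sql("");
    assert!(!sql2.contains("like"));
    assert!(args2.is_empty());
}
```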
use once_cell::sync::Lazy;
pub static RB:Lazy<Rbatis> = Lazy::new(||Rbatis::new());

/// The macro generates execution logic from the method definition, similar to @Select dynamic SQL in Java/MyBatis
/// RB is the name by which Rbatis is referenced locally, e.g. DAO::RB, com::XXX::RB...
/// The second parameter is standard driver SQL. Note the placeholder for each database: MySQL uses ?, Postgres uses $1...
/// The macro rewrites the method to 'pub async fn select(name: &str) -> rbatis::core::Result<BizActivity> {}'
///
#[sql("select * from biz_activity where id = ?")]
pub async fn select(rb: &Rbatis,name: &str) -> BizActivity {}
//or: pub async fn select(name: &str) -> rbatis::core::Result<BizActivity> {}

#[tokio::test]
pub async fn test_macro() {
    fast_log::init(fast_log::config::Config::new().console());
    RB.link("mysql://root:123456@localhost:3306/test").await.unwrap();
    let a = select(&RB,"1").await.unwrap();
    println!("{:?}", a);
}
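A paged select like `select_by_condition` above combines the generated SQL with a limit/offset pair derived from the `PageRequest`. A rough sketch of that arithmetic (assuming 1-based page numbers; this is not rbatis's actual code):

```rust
// Sketch of the pagination arithmetic behind Page/PageRequest
// (1-based page numbers assumed; illustrative only).
struct PageRequest {
    page_no: u64,
    page_size: u64,
}

impl PageRequest {
    fn offset(&self) -> u64 {
        // page 1 starts at offset 0; saturating_sub guards page_no == 0
        self.page_no.saturating_sub(1) * self.page_size
    }
    fn limit_clause(&self) -> String {
        format!(" limit {} offset {}", self.page_size, self.offset())
    }
    // total pages for a given row count, rounding up
    fn pages(&self, total: u64) -> u64 {
        (total + self.page_size - 1) / self.page_size
    }
}

fn main() {
    let req = PageRequest { page_no: 3, page_size: 20 };
    assert_eq!(req.offset(), 40);
    assert_eq!(req.limit_clause(), " limit 20 offset 40");
    assert_eq!(req.pages(101), 6);
}
```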

Progress - in sequential order

The following functionality is supported (x = not yet implemented):
CRUD, with built-in CRUD template (built-in CRUD supports logical deletes)
LogSystem (logging component)
Tx(task/Nested transactions)
Py(using py-like statement in SQL)
async/await support
PagePlugin (pagination)
LogicDelPlugin
Html(xml) compile-time dynamic SQL
Database table conversion page (Web UI, coming soon) x
  • Conclusion: assuming zero time spent on IO, a single-threaded benchmark achieves 200K QPS, several times more performant than GC languages such as Go or Java.

FAQ

  • Postgres type definitions: please see the doc (Chinese | English)

  • Support for DateTime and BigDecimal?
    Currently supports rbatis::DateTimeNative (chrono-based) and bigdecimal::BigDecimal
  • Support for async/.await?
    Currently supports both async_std and tokio
  • Postgres statements use $1, $2 instead of ? as in MySQL; does this require special treatment? No: because rbatis uses #{} to describe parameter variables, you only need to write the correct parameter names, and they do not need to match the placeholder symbols used by the database.
  • Supports for Oracle database driver?
    No, moving away from IOE is recommended.
  • Which crate should be depended on if only the driver is needed?
    rbatis-core; add rbatis-core = "*" to Cargo.toml
  • How to select async/.await runtime?
    see https://rbatis.github.io/rbatis.io/#/en/
  • column "id" is of type uuid but expression is of type text?
    see https://rbatis.github.io/rbatis.io/#/en/?id=database-column-formatting-macro
  • How to use '::uuid','::timestamp' on PostgreSQL?
    see https://rbatis.github.io/rbatis.io/#/en/?id=database-column-formatting-macro
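A std-only sketch of the idea behind the two answers above: `#{name}` placeholders are lowered to the driver's syntax (`?` for MySQL, `$N` for Postgres), and a per-column format template such as `{}::uuid` (as in `formats_pg`) wraps the emitted placeholder. This is illustrative only, not rbatis internals:

```rust
use std::collections::HashMap;

// Illustrative only: lower `#{name}` placeholders to driver syntax, applying an
// optional per-column format template such as "{}::uuid".
fn lower(sql: &str, postgres: bool, formats: &HashMap<&str, &str>) -> String {
    let mut out = String::new();
    let mut rest = sql;
    let mut n = 0usize;
    while let Some(start) = rest.find("#{") {
        out.push_str(&rest[..start]);
        let end = rest[start..].find('}').expect("unclosed #{}") + start;
        let name = &rest[start + 2..end];
        n += 1;
        let ph = if postgres { format!("${}", n) } else { "?".to_string() };
        match formats.get(name) {
            Some(tpl) => out.push_str(&tpl.replace("{}", &ph)), // e.g. "$1::uuid"
            None => out.push_str(&ph),
        }
        rest = &rest[end + 1..];
    }
    out.push_str(rest);
    out
}

fn main() {
    let mut formats = HashMap::new();
    formats.insert("id", "{}::uuid");
    let sql = "select * from t where id = #{id} and name = #{name}";
    assert_eq!(lower(sql, false, &HashMap::new()),
               "select * from t where id = ? and name = ?");
    assert_eq!(lower(sql, true, &formats),
               "select * from t where id = $1::uuid and name = $2");
}
```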

Changelog

Roadmap

Contact/donation, or star rbatis

  • Gitter

Donation

zxj347284221

Contact (when adding as a friend, please note 'rbatis'). WeChat group: add on WeChat first, then you will be invited into the group

zxj347284221

Comments
  • High Volume Connection - Transactions | CPU Utilization

    First, thanks for your efforts on rbatis. Conceptually I think you are really headed in the right direction. I wanted to inquire about expected behavior under a high volume of connections (in my case coming in facilitated by actix). I haven't spent much time investigating this particular issue, but, what I have observed is that under high connection load where those connections are issuing transactions against the database, rbatis quickly exhausts the connection pool and drives the CPU utilization up incredibly high and remains essentially frozen for an indeterminate amount of time.

    I've been investigating the potential use of rbatis as a replacement for my direct interaction with sqlx, as I like some of the convenience of the insert/update processes. Performance is similar to sqlx in my testing (I realize you use sqlx-core), but I've run into some issues with transactions. As an example, if I do (in the example I set up in my project):

    
            let context = self.connection.get_context_rbatis().await?;
            let conn = self.connection.get_connection_rbatis().await?;
            conn.save(context.as_str(), registrant).await?;
    

    And then run:

    CLionProjects % wrk -c 50 -t 2 http://localhost:8080/v1/registrant/123
    Running 10s test @ http://localhost:8080/v1/registrant/123
      2 threads and 50 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    21.80ms   12.50ms  99.78ms   81.08%
        Req/Sec     1.19k   243.06     1.59k    80.00%
      23788 requests in 10.01s, 27.72MB read
    Requests/sec:   2375.27
    Transfer/sec:      2.77MB
    

    Performance is about 20% slower than my current implementation with sqlx, but fine. There is some abstraction in my own code, but what's happening above is passing a ""(blank) str to the context argument for save(). Ok no problem.

    However, when I do:

            self.connection.start_transaction_rbatis().await?;
            let context = self.connection.get_context_rbatis().await?;
            let conn = self.connection.get_connection_rbatis().await?;
            conn.save(context.as_str(), registrant).await?;
            self.connection.commit_transaction_rbatis().await?;
    
    

    And then run:

    wrk -c 50 -t 2 http://localhost:8080/v1/registrant/123
    Running 10s test @ http://localhost:8080/v1/registrant/123
      2 threads and 50 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    99.89ms   50.21ms 208.38ms   76.74%
        Req/Sec    71.33     52.97   140.00     66.67%
      43 requests in 10.04s, 51.31KB read
    Requests/sec:      4.28
    Transfer/sec:      5.11KB
    

    rbatis quickly exhausts the connection pool(there are currently 100 available connections on postgres) and then proceeds to enter a state on my local workstation where it's occupying nearly 100% of available CPU where it stays indefinitely. The only difference is begin_tx() and commit() are being called across the same save() request.

    What I'm currently doing with sqlx directly is to use acquire() to attach a dedicated connection, loading that into an Arc() and manually issuing BEGIN/COMMIT (and keeping track of things internally). I like your approach of generating a UUID for the context and loading that up into a dashmap of Transactions, but something is awry with what's happening above. I should also add that if I issue requests at a lower velocity (say 15 connections across 2 threads) rbatis will keep up, and simple manual requests work flawlessly. This issue is related to the generation of a context, passing that context to save(), then calling commit() in a situation where many connections are simultaneously issuing requests.

    When I run the same series of events(BEGIN/INSERT/COMMIT) on my direct integration with sqlx, I get(and still have available connections in the pool even with 100 connections requested by work over 2 threads):

    CLionProjects % wrk -c 100 -t 2 http://localhost:8080/v1/registrant/123
    Running 10s test @ http://localhost:8080/v1/registrant/123
      2 threads and 100 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    29.66ms   26.09ms 288.41ms   92.14%
        Req/Sec     1.94k   337.15     2.39k    86.50%
      38655 requests in 10.01s, 34.17MB read
    Requests/sec:   3860.37
    Transfer/sec:      3.41MB
    

    And in my case I'm doing:

    self.dedicated_connection = Some(Arc::new(Mutex::new(self.pool.acquire().await?)));

    And then passing that around as necessary and then dropping the connection after commit/rollback manually.

    Again I haven't done a lot of investigation on this in rbatis, but I would not expect rbatis to behave this way when generating a context, passing that to save(), then committing it. Which is essentially what my test is doing, except it's with significant load being generated by wrk.

    Any insights on this? I'd like to use rbatis, but would need to resolve this behavior.

    Thanks again for any insights.

    opened by chriskyndrid 35
  • Could you consider supporting macro-generated queries similar to MyBatis's @Query("select* ....)?

    For example: #[query(sql="select* from xxx where name = $1")] Option<Vec> selectXxxByName(name: String) {None}

    The macro would then transform this function into something like: Option<Vec> selectXxxByName(name: String, pool: Rbatis) { Some(pool.select(....)) }

    opened by Silentdoer 17
  • rbatis.begin_defer(..., false) does not rollback the transaction (PostgreSQL)

    Hi,

    I try to write a test on database and want TxManager to rollback the transaction at the end of the test.

    It seems, that rollback does not work.

    Tried:

    • begin_defer()
    • begin_tx_defer()
    • Rbatis.rollback()

    Database is PostgreSQL

    Can you please advise?

    help wanted 
    opened by Aries85 13
  • or() and like() have problems

    1. When using like, the generated statement has spaces around the ?, so no data is found. SELECT count(1) FROM t_test WHERE del = 0 AND TITLE LIKE '% ? %' ORDER BY create_time DESC
    2. How is or() supposed to be used? An extra OR appears out of nowhere, while the other conditions use AND. RB.new_wrapper() .eq("1","1") .or() .like("TITLE",&str) .or() .like("ORIGINAL_NAME",&str) generates: SELECT count(1) FROM t_test WHERE del = 0 AND 1 = ? OR OR TITLE LIKE '% ? %' OR OR ORIGINAL_NAME LIKE '% ? %'
    opened by RisingStar20 13
  • postgresql serialization error: fail to insert: {"Err":"column \"chefid\" of relation \"offer\" does not exist"}

    fail to insert on postgres

    DB Schema

     ordinal_position | column_name    | data_type | character_maximum_length | modifier | notnull | hasdefault
    ------------------+----------------+-----------+--------------------------+----------+---------+------------
     1                | id             | uuid      | 16                       | -1       | t       | t
     2                | create_date    | timestamp | 8                        | -1       | t       | t
     3                | update_date    | timestamp | 8                        | -1       | t       | t
     4                | version        | int4      | 4                        | -1       | t       | f
     5                | is_active      | bool      | 1                        | -1       | t       | t
     6                | dateOfDelivery | timestamp | 8                        | -1       | t       | f
     7                | tokenValue     | int4      | 4                        | -1       | t       | t
     8                | maxQuantity    | int4      | 4                        | -1       | t       | t
     9                | status         | varchar   | -1                       | 54       | t       | t
     10               | coordinates    | point     | 16                       | -1       | f       | f
     11               | chefId         | uuid      | 16                       | -1       | f       | f
     12               | dishId         | uuid      | 16                       | -1       | t       | f
    (12 rows)

    RUST CODE

    let off = Offer {
            id: Some(Uuid::new_v4()),
            create_date: Some(Utc::now().naive_local()),
            update_date: Some(Utc::now().naive_local()),
            version: Some(1),
            is_active: Some(true),
            date_of_delivery: Some(Utc::now().naive_local()),
            token_value: Some(2),
            max_quantity: Some(2),
            status: Some("OUT_OF_DELIVERY".to_string()),
            // coordinates: None,
            chefId: Some(Uuid::parse_str("ebb787b4-d90c-4b0f-9bbe-7324f3d90efe").unwrap()),
            dishId: Some(Uuid::parse_str("85357370-8aef-4262-bf68-f6d38522cf6b").unwrap()),
        };
        Json(RB.save_batch("", &vec![off]).await)
    

    log

    [2020-09-26T12:22:06Z INFO  rbatis::rbatis] [rbatis] [] Exec ==> INSERT INTO Offer (chefId,create_date,date_of_delivery,dishId,id,is_active,max_quantity,status,token_value,update_date,version) VALUES ( $1 ,cast( $2  as timestamp),cast( $3  as timestamp), $4 , $5 , $6 , $7 , $8 , $9 ,cast( $10  as timestamp), $11 )
    [2020-09-26T12:22:06Z INFO  rbatis::rbatis] [rbatis] [] Args ==> ["ebb787b4-d90c-4b0f-9bbe-7324f3d90efe","2020-09-26T12:22:06.537112100","2020-09-26T12:22:06.537112100","85357370-8aef-4262-bf68-f6d38522cf6b","3d30b1fa-4d41-4b54-88e4-9e9065904b9f",true,2,"OUT_OF_DELIVERY",2,"2020-09-26T12:22:06.537112100",1]
    [2020-09-26T12:22:06Z INFO  rbatis::rbatis] [rbatis] [] RowsAffected <== 0
    
    opened by insanebaba 13
  • mysql IN clause not working with vector

    • my struct:
    impl_select!(User{select_by_user_role(user:String, roles: Vec<String>) -> Option => "`where user = #{user} and role IN (#{roles}) limit 1`"});
    #[derive(Clone, Debug, Serialize, Deserialize)]
    pub struct User {
        pub username: String,
    }
    
    • log:
    [rbatis] [428143780235972608] ==> select * from user where user = ? and role IN (?) limit 1
    [rbatis]               Args   ==> ["test",["service_admin"]]
    
    • I would expect generated query to be: select * from upis_role_ejb_realm net_id where net_id = "test" and role IN ("service_admin");
    help wanted 
    opened by sysmat 10
  • Is there something wrong with the postgres driver when executing a query that includes camelCase columns?

    code:

    #![allow(unused_must_use)]
    #![allow(non_snake_case)]
    #[macro_use]
    extern crate rbatis;
    
    use actix_web::{web, App, HttpResponse, HttpServer, Responder};
    // use chrono::NaiveDateTime;
    use rbatis::crud::{CRUD};
    use rbatis::rbatis::Rbatis;
    use rbatis::core::runtime::sync::Arc;
    
    // #[crud_table(table_name:"task" | table_columns:"id,status,sequence_id,functor_id,total,current,complete,startTime,completeTime,create_at,update_at")]
    #[crud_table]
    #[derive(Clone, Debug)]
    pub struct Task {
        pub id: Option<String>,
        pub status: Option<String>,
        pub sequence_id: Option<String>,
        pub functor_id: Option<String>,
        pub total: Option<i32>,
        pub current: Option<i32>,
        pub complete: Option<i32>,
        pub startTime: Option<i64>,
        pub completeTime: Option<i64>,
        pub created_at: Option<String>,
        pub updated_at: Option<String>,
    }
    
    impl Default for Task {
        fn default() -> Self {
            Task {
                id: None,
                status: None,
                sequence_id: None,
                functor_id: None,
                total: None,
                current: None,
                complete: None,
                startTime: None,
                completeTime: None,
                created_at: None,
                updated_at: None,
            }
        }
    }
    
    pub const PG_URL: &'static str = "postgresql://postgres:123456@abyssii:35432/ac";
    pub const MYSQL_URL: &'static str = "mysql://root:123456@abyssii:33060/ma";
    
    async fn postgres(rb: web::Data<Arc<RbatisConnection>>) -> impl Responder {
        let v = rb.postgres.fetch_list::<Task>().await.unwrap();
        HttpResponse::Ok().json(serde_json::json!(v).to_string())
    }
    
    async fn mysql(rb: web::Data<Arc<RbatisConnection>>) -> impl Responder {
        let v = rb.mysql.fetch_list::<Task>().await.unwrap();
        HttpResponse::Ok().json(serde_json::json!(v).to_string())
    }
    
    struct RbatisConnection {
        postgres: Rbatis,
        mysql: Rbatis,
    }
    
    #[actix_web::main]
    async fn main() -> std::io::Result<()> {
        //log
        fast_log::init_log("requests.log", 1000, log::Level::Info, None, true);
        //init rbatis . also you can use  lazy_static! { static ref RB: Rbatis = Rbatis::new(); } replace this
        log::info!("linking database...");
        let pgrb = Rbatis::new();
        pgrb.link(PG_URL).await.expect("rbatis link postgres fail");
        let myrb = Rbatis::new();
        myrb.link(MYSQL_URL).await.expect("rbatis link mysql fail");
    
        let rb: RbatisConnection = RbatisConnection {
            postgres: pgrb,
            mysql: myrb,
        };
    
        let rb = Arc::new(rb);
    
        log::info!("linking database successful!");
        //router
        HttpServer::new(move || {
            App::new()
                //add into actix-web data
                .data(rb.to_owned())
                .route("/pg", web::get().to(postgres))
                .route("/my", web::get().to(mysql))
        })
            .bind("0.0.0.0:8000")?
            .run()
            .await
    }
    

    debug_mode output:

    ............gen impl CRUDTable:
     #[derive(serde :: Serialize, serde :: Deserialize)] #[derive(Clone, Debug)]
    pub struct Task
    {
        pub id : Option < String >, pub status : Option < String >, pub
        sequence_id : Option < String >, pub functor_id : Option < String >, pub
        total : Option < i32 >, pub current : Option < i32 >, pub complete :
        Option < i32 >, pub startTime : Option < i64 >, pub completeTime : Option
        < i64 >, pub created_at : Option < String >, pub updated_at : Option <
        String >,
    } impl rbatis :: crud :: CRUDTable for Task
    {
        fn get(& self, column : & str) -> serde_json :: Value
        {
            return match column
            {
                "id" => { return serde_json :: json! (& self.id) ; } "status" =>
                { return serde_json :: json! (& self.status) ; } "sequence_id" =>
                { return serde_json :: json! (& self.sequence_id) ; } "functor_id"
                => { return serde_json :: json! (& self.functor_id) ; } "total" =>
                { return serde_json :: json! (& self.total) ; } "current" =>
                { return serde_json :: json! (& self.current) ; } "complete" =>
                { return serde_json :: json! (& self.complete) ; } "startTime" =>
                { return serde_json :: json! (& self.startTime) ; } "completeTime"
                => { return serde_json :: json! (& self.completeTime) ; }
                "created_at" =>
                { return serde_json :: json! (& self.created_at) ; } "updated_at"
                => { return serde_json :: json! (& self.updated_at) ; } _ =>
                { serde_json :: Value :: Null }
            }
        } fn table_name() -> String { "task".to_string() } fn table_columns() ->
        String
        {
            "id,status,sequence_id,functor_id,total,current,complete,startTime,completeTime,created_at,updated_at".to_string()
        } fn formats(driver_type : & rbatis :: core :: db :: DriverType) -> std ::
        collections :: HashMap < String, fn(arg : & str) -> String >
        {
            let mut m : std :: collections :: HashMap < String, fn(arg : & str) ->
            String > = std :: collections :: HashMap :: new() ; match driver_type
            {
                rbatis :: core :: db :: DriverType :: Mysql => { return m ; },
                rbatis :: core :: db :: DriverType :: Postgres => { return m ; },
                rbatis :: core :: db :: DriverType :: Sqlite => { return m ; },
                rbatis :: core :: db :: DriverType :: Mssql => { return m ; },
                rbatis :: core :: db :: DriverType :: None => { return m ; },
            }
        }
    }
    ............gen impl CRUDTable end............
    

    mysql table:

    -- auto-generated definition
    create table task
    (
        id           char(36)     not null
            primary key,
        status       varchar(100) null,
        sequence_id  char(36)     null,
        functor_id   varchar(100) null,
        total        int          null,
        current      int          null,
        complete     int          null,
        startTime    bigint       null,
        completeTime bigint       null,
        created_at   timestamp    not null,
        updated_at   timestamp    not null
    );
    

    postgres table:

    -- auto-generated definition
    create table task
    (
        id             uuid                     not null
            constraint task_pkey
                primary key,
        status         varchar(100),
        sequence_id    uuid,
        functor_id     varchar(100),
        total          integer,
        current        integer,
        complete       integer,
        "startTime"    bigint,
        "completeTime" bigint,
        created_at     timestamp with time zone not null,
        updated_at     timestamp with time zone not null
    );
    
    alter table task
        owner to postgres;
    

    Works with MySQL: perfect! [images]

    Works with Postgres: [images]; postgres log: [image]

    bug 
    opened by PhoenSXar 10
  • A statically defined Rbatis instance: the first SQL executes fine, but the second one hangs

    lazy_static!{ pub static ref Rb: Rbatis<'static>={ let rb = Rbatis::new(); async_std::task::block_on(async{ rb.link(&CONFIG.database.url).await; }); return rb; }; }

    Rb.fetch_by_wrapper() ===> success

    Rb.remove_by_wrapper() ===> hangs and never returns

    ============================== I changed it to: pub async fn connect()->Rbatis<'static>{ let rb = Rbatis::new(); rb.link(&CONFIG.database.url).await; return rb; }

    connecti().await.fetch_by_wrapper() ===> success; connecti().await.remove_by_wrapper() ===> success. This works normally. May I ask what the reason for this is?

    opened by adminSxs 10
  • How to use the PostgreSQL jsonb type

    db design

    CREATE TABLE IF NOT EXISTS categories
    (
        id              UUID      DEFAULT uuid_generate_v4(),
        created_at      TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        updated_at      TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        uuid            VARCHAR NOT NULL UNIQUE,
        name            VARCHAR NOT NULL,
        default_setting jsonb   NOT NULL,
        PRIMARY KEY (id)
    );
    

    model

    #[crud_table(table_name:categories| formats_pg: "id:{}::uuid,created_at:{}::timestamp,updated_at:{}::timestamp,default_setting:{}::jsonb")]
    #[derive(Debug, Default, Serialize, Deserialize, Clone)]
    #[serde(rename_all = "camelCase")]
    pub struct Category {
        pub id: Option<Uuid>,
        pub created_at: Option<DateTimeNative>,
        pub updated_at: Option<DateTimeNative>,
        pub uuid: String,
        pub name: String,
        pub default_setting: Option<String>,
    }
    

    run save

     let category = Category {
                uuid: input.uuid,
                name: input.name,
                default_setting: Some(input.default_setting.to_string()),
                ..Default::default()
            };
            DB.save(
                &category,
                &[
                    Skip::Column("id"),
                    Skip::Column("created_at"),
                    Skip::Column("updated_at"),
                ],
            )
            .await?;
    

    and got

    2021-11-03 12:19:47.140014700 UTC    INFO rbatis::plugin::log - [rbatis] [] Exec   ==> insert into categories (uuid,name,default_setting) values ($1,$2,$3::jsonb)
                                                                    [rbatis] [] Args   ==> ["234", "34636", null]
    
    2021-11-03 12:19:47.155176800 UTC    ERROR rbatis::plugin::log - [rbatis] [] ReturnErr  <== error returned from database: cannot cast type integer to jsonb  (D:\rustup\.cargo\registry\src\mirrors.sjtug.sjtu.edu.cn-7a04d2510079875b\rbatis-3.0.1\src\plugin\log.rs:43)
    
    opened by foxzool 9
  • BUG: update_by_wrapper updates fields that are not database columns

    BUG: update_by_wrapper updates fields that are not database columns

    I defined the following struct and trait implementation:

    #[derive(Serialize, Deserialize, Clone, Debug)]
    pub struct IpInfo {
        pub id: Option<u32>,
        pub net_group_id: Option<u32>,
        pub ip_pool_id: Option<u32>,
        pub ip: Option<String>,
        pub mask: Option<String>,
        pub gateway: Option<String>,
        pub status: Option<u8>, //0-> unused 1-> using 2-> reserved 3->abandoned
        pub pod: Option<String>,
        pub create_time: Option<NaiveDateTime>,
    
        //not a table column
        pub ip_pool: Option<String>
    }
    
    impl CRUDTable for IpInfo {
        type IdType = u32; //providing the IdType type is enough; the other trait methods default to a JSON-serialization-based implementation
        fn get_id(&self) -> Option<&Self::IdType> {
            return self.id.as_ref();
        } // must be implemented to return the id value
        //fn table_name() -> String {} //can be overridden; default ip_pool
        fn table_columns() -> String {
            "id,net_group_id,ip_pool_id,ip,mask,gateway,status,pod,create_time".to_string()
        } //can be overridden
        //fn format_chain() -> Vec<Box<dyn ColumnFormat>>{} //can be overridden
    }
    

    The ip_pool field is not a database column; it is only filled in later for the response to the front end. So I declared table_columns() in impl CRUDTable and left it out of the list. Insert, delete, and select all work fine, but when calling db.update_by_wrapper::<IpInfo>("", &mut update_ip, &w, true), the SQL generated in the log is as follows:

    2021-04-08 02:48:40.399983300 +00:00 INFO rbatis::plugin::log - [rbatis] [] Exec  ==> update ip_info set  create_time = ?, gateway = ?, ip = ?, ip_pool = ?, ip_pool_id = ?, mask = ?, net_group_id = ?, pod = ?, status = ?  where id = ?
                                                                    [rbatis] [] Args  ==> ["2021-04-07T12:38:50","10.85.243.1","10.85.243.20",null,3,"255.255.255.0",3,null,0,277]
    thread 'actix-rt:worker:1' panicked at 'called `Result::unwrap()` on an `Err` value: E("error returned from database: 1054 (42S22): Unknown column \'ip_pool\' in \'field list\'")', src/ip_manage.rs:125:10
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    

    It actually tried to update ip_pool, the non-database field, which caused the error. I looked into it: this method simply serializes the struct with json! and updates every field, without consulting table_columns(). That is a bug. Please fix it soon; I am using this in production and it is painful.
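The fix the report asks for can be sketched as a simple column filter: before rendering the UPDATE statement, intersect the serialized field map with the columns declared by table_columns(). This is an illustrative std-only sketch with a hypothetical helper name, not rbatis internals:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical sketch (not rbatis internals): restrict the serialized
// field map to the columns declared by table_columns(), so non-table
// fields like `ip_pool` never reach the generated UPDATE statement.
fn filter_update_columns(
    fields: HashMap<String, String>,
    table_columns: &str,
) -> HashMap<String, String> {
    let allowed: HashSet<&str> = table_columns.split(',').map(str::trim).collect();
    fields
        .into_iter()
        .filter(|(name, _)| allowed.contains(name.as_str()))
        .collect()
}

fn main() {
    let mut fields = HashMap::new();
    fields.insert("ip".to_string(), "10.85.243.20".to_string());
    fields.insert("ip_pool".to_string(), "pool-a".to_string());
    let filtered = filter_update_columns(
        fields,
        "id,net_group_id,ip_pool_id,ip,mask,gateway,status,pod,create_time",
    );
    println!("{}", filtered.contains_key("ip"));      // true
    println!("{}", filtered.contains_key("ip_pool")); // false
}
```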

    opened by JinAirsOs 9
  • Pagination plugin: the count SQL contains an extra ORDER BY (PostgreSQL)

    Pagination plugin: the count SQL contains an extra ORDER BY (PostgreSQL)

    let req = PageRequest::new(1, 1); //page request: page number, page size
        let wraper = RB
            .new_wrapper()
            .eq("1", 1)
            .order_by(false, &["create_date"])
            .check()
            .unwrap();
        let r: rbatis_core::Result<Page<CyCustomZtzts>> =
            RB.fetch_page_by_wrapper("", &wraper, &req).await;
    

    Generated SQL:

    SELECT count(1) FROM cy_custom_ztzts WHERE 1 = $1 ORDER BY create_date DESC
    

    With the ORDER BY clause, the count query cannot run.
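A count statement does not need ordering, so one fix is for the page plugin to strip a trailing ORDER BY before wrapping the query in count(1). A minimal std-only sketch (a hypothetical helper, not rbatis's actual implementation; assumes ASCII SQL text):

```rust
// Hypothetical sketch: drop a trailing ORDER BY clause before reusing
// the query as a count statement, since ordering is meaningless for
// `SELECT count(1)` and PostgreSQL rejects ORDER BY on columns absent
// from the select list.
fn strip_order_by(sql: &str) -> &str {
    // Find the last " ORDER BY " case-insensitively; byte offsets match
    // the original string because the SQL here is ASCII.
    match sql.to_uppercase().rfind(" ORDER BY ") {
        Some(pos) => sql[..pos].trim_end(),
        None => sql,
    }
}

fn main() {
    let sql = "SELECT count(1) FROM cy_custom_ztzts WHERE 1 = $1 ORDER BY create_date DESC";
    // prints "SELECT count(1) FROM cy_custom_ztzts WHERE 1 = $1"
    println!("{}", strip_order_by(sql));
}
```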

    opened by ciyool 9
  • MySQL SHOW statement prints only the field names, with no content

    MySQL SHOW statement prints only the field names, with no content

    hi

    Running a MySQL SHOW statement prints only the field names with no content. The code is as follows:

        let sql_show_ssl_cipher = "SHOW STATUS LIKE 'Ssl_cipher'";
        let cipher_rbatis = rb
            .fetch_decode::<Vec<HashMap<String, String>>>(sql_show_ssl_cipher, vec![])
            .await;
        println!(">>>>> Cipher in use from rbatis: {:?}", cipher_rbatis);
    

    The output is as follows:

    >>>>> Cipher in use from rbatis: Ok([{"Variable_name": "Ssl_cipher", "Value": ""}])
    

    Am I using it wrong somewhere? Thanks!

    help wanted 
    opened by jiashiwen 1
Releases(v4.0.47)
  • v4.0.47(Dec 30, 2022)

    v4.0.47

    • edit TableSync plugin (support PRIMARY KEY AUTOINCREMENT NOT NULL)
    use rbatis::rbatis::Rbatis;
    use rbatis::rbdc::datetime::FastDateTime;
    use rbatis::table_sync::{SqliteTableSync, TableSync};
    use rbdc_sqlite::driver::SqliteDriver;
    use rbs::to_value;
    
    #[derive(Clone, Debug, serde::Serialize, serde::Deserialize)]
    pub struct RBUser {
        pub id: i32,
        pub name: Option<String>,
        pub remark: Option<String>,
        pub create_time: Option<FastDateTime>,
        pub version: Option<i64>,
        pub delete_flag: Option<i32>,
    }
    
    #[tokio::main]
    pub async fn main() {
        fast_log::init(fast_log::Config::new().console()).expect("rbatis init fail");
        let rb = Rbatis::new();
        rb.init(SqliteDriver {}, "sqlite://target/sqlite.db")
            .unwrap();
        let mut s = SqliteTableSync::default();
        s.sql_id = " PRIMARY KEY AUTOINCREMENT NOT NULL ".to_string();
        s.sync(rb.acquire().await.unwrap(), to_value!(RBUser {
            id: 0,
            name: Some("".to_string()),
            remark: Some("".to_string()),
            create_time: Some(FastDateTime::utc()),
            version: Some(1),
            delete_flag: Some(1),
        }), "rb_user")
            .await
            .unwrap();
    }
    
    
    Source code(tar.gz)
    Source code(zip)
  • v4.0.46(Dec 23, 2022)

    v4.0.46

    • support new method sql()

    for example

    use rbatis::sql::IntoSql;
    impl_select!(BizActivity{select_by_method(ids:&[&str],logic:HashMap<&str,Value>) -> Option => "`where ${logic.sql()} and id in ${ids.sql()} limit 1`"});
    
    //use
    #[derive(Clone, Debug, Serialize, Deserialize)]
    pub struct BizActivity {
        pub id: Option<String>,
        pub name: Option<String>,
        pub pc_link: Option<String>,
        pub h5_link: Option<String>,
        pub pc_banner_img: Option<String>,
        pub h5_banner_img: Option<String>,
        pub sort: Option<String>,
        pub status: Option<i32>,
        pub remark: Option<String>,
        pub create_time: Option<FastDateTime>,
        pub version: Option<i64>,
        pub delete_flag: Option<i32>,
    }
    #[tokio::main]
    pub async fn main() {
        fast_log::init(fast_log::Config::new().console().level(log::LevelFilter::Debug)).expect("rbatis init fail");
        let mut rb = init_db().await;
        let mut logic = HashMap::new();
        logic.insert("and id = ", Value::I32(1));
        logic.insert("and id != ", Value::I32(2));
        let data = BizActivity::select_by_method(&mut rb, &["1", "2"], logic).await;
    }
    
    2022-12-24 00:26:03.7494548 INFO rbatis::plugin::log - [rbatis] [448884328362020865] Fetch  ==> select * from biz_activity where id in ('1','2') and id != 2 and id = 1   limit 1
    
    Source code(tar.gz)
    Source code(zip)
  • v4.0.44(Dec 6, 2022)

  • v4.0.43(Nov 17, 2022)

    v4.0.43

    • support adding and removing interceptors dynamically
    
    pub struct LogicDeletePlugin {}
    
    impl SqlIntercept for LogicDeletePlugin {
        fn do_intercept(
            &self,
            _rb: &Rbatis,
            sql: &mut String,
            _args: &mut Vec<Value>,
            _is_prepared_sql: bool,
        ) -> Result<(), Error> {
            println!("[LogicDeletePlugin] sql=> {}", sql);
            Ok(())
        }
    }
    
    #[tokio::main]
    pub async fn main() {
        let rb = Rbatis::new();
        rb.init(
            rbdc_sqlite::driver::SqliteDriver {},
            "sqlite://target/sqlite.db",
        )
        .unwrap();
    
        // Add dynamically; no `mut` needed
        rb.sql_intercepts.push(Box::new(LogicDeletePlugin {}));
    }
    
    Source code(tar.gz)
    Source code(zip)
  • v4.0.42(Nov 8, 2022)

  • v4.0.41(Nov 7, 2022)

  • v4.0.40(Nov 4, 2022)

  • v4.0.39(Sep 16, 2022)

    v4.0.39

    • Automatically detect whether the current build is in debug_mode, and show build details when debug_mode is on
    • in debug_mode, database rows are printed with log::debug!()
    • with --release, debug_mode is automatically disabled
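    The release note does not show how the detection works; a common Rust idiom for it (an assumption here, not necessarily rbatis's exact code) is cfg!(debug_assertions), which --release turns off:

```rust
// Minimal illustration: `debug_assertions` is enabled for debug builds
// and disabled by `--release`, so it can gate verbose row logging at
// compile time.
fn debug_mode() -> bool {
    cfg!(debug_assertions)
}

fn main() {
    if debug_mode() {
        println!("debug_mode on: printing database rows");
    } else {
        println!("debug_mode off (release build)");
    }
}
```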
    Source code(tar.gz)
    Source code(zip)
  • v4.0.37(Sep 16, 2022)

  • v4.0.36(Sep 15, 2022)

  • v4.0.35(Aug 30, 2022)

  • v4.0.34(Aug 26, 2022)

  • v4.0.32(Aug 26, 2022)

  • v4.0.31(Aug 25, 2022)

  • v4.0.30(Aug 24, 2022)

  • v4.0.29(Aug 24, 2022)

  • v4.0.27(Aug 24, 2022)

  • v4.0.26(Aug 23, 2022)

    v4.0.26

    • change the htmlsql_select_page! macro
    #[macro_use]
    extern crate rbatis;
    
    use log::LevelFilter;
    use rbatis::rbatis::Rbatis;
    use rbatis::rbdc::datetime::FastDateTime;
    use rbdc_sqlite::driver::SqliteDriver;
    use serde::{Deserialize, Serialize};
    use std::fs::File;
    use std::io::Read;
    
    htmlsql_select_page!(select_page_data(name: &str, dt: &FastDateTime) -> BizActivity => "example/example.html");
    
    #[tokio::main]
    pub async fn main() {
        fast_log::init(fast_log::Config::new().console()).expect("rbatis init fail");
        let rb = Rbatis::new();
        rb.link(
            SqliteDriver {},
            "sqlite://target/sqlite.db",
        )
        .await
        .unwrap();
        let a = select_page_data(&mut rb.clone(),
                                              &PageRequest::new(1, 10),
                                              "test",
                                              &FastDateTime::now().set_micro(0))
            .await
            .unwrap();
        println!("{:?}", a);
    }
    
    Source code(tar.gz)
    Source code(zip)
  • v4.0.25(Aug 23, 2022)

  • v4.0.24(Aug 23, 2022)

    v4.0.24

    • add macro htmlsql_select_page!(). For example:
    
    htmlsql_select_page!(BizActivity{select_page_data(name: &str, dt: &FastDateTime) => "example/example.html"});
    
    #[tokio::main]
    pub async fn main() {
        fast_log::init(fast_log::Config::new().console()).expect("rbatis init fail");
        //use static ref
    let rb = Rbatis::new();
    rb.link(
            SqliteDriver {},
            &format!("sqlite://{}target/sqlite.db", path),
        )
        .await
        .unwrap();
        let a = BizActivity::select_page_data(&mut rb.clone(),
                                              &PageRequest::new(1, 10),
                                              "test",
                                              &FastDateTime::now().set_micro(0))
            .await
            .unwrap();
        println!("{:?}", a);
    }
    
    Source code(tar.gz)
    Source code(zip)
  • v4.0.23(Aug 21, 2022)

  • v4.0.20(Aug 17, 2022)

  • v4.0.18(Aug 14, 2022)

  • v4.0.16(Aug 14, 2022)

  • v4.0.15(Aug 13, 2022)

    v4.0.15

    • change log plugin: add fns:
        fn set_change_level_filter(&mut self, f: HashMap<LevelFilter, LevelFilter>);
        fn get_change_level_filter(&self) -> &HashMap<LevelFilter, LevelFilter>;
    
    • ObjectId: add methods u128() and with_u128()
    • flume: disable default features
    Source code(tar.gz)
    Source code(zip)
  • v4.0.14(Aug 12, 2022)

  • v4.0.13(Aug 11, 2022)

  • v4.0.12(Aug 11, 2022)

  • v4.0.11(Aug 11, 2022)

  • v4.0.10(Aug 11, 2022)

Owner
rbatis
Rbatis: Flexible. Intelligent. Efficient ORM