CDRS is a native Apache Cassandra client written in Rust. Versions 1.x live on https://github.com/AlexPikalov/cdrs/tree/v.1.x Looking for an async version? Check the work-in-progress https://github.com/AlexPikalov/cdrs-async

Overview

CDRS is looking for maintainers

CDRS - Apache Cassandra driver

CDRS is an Apache Cassandra driver written in pure Rust.

Features

  • TCP/SSL connection;
  • Load balancing;
  • Connection pooling;
  • LZ4, Snappy compression;
  • Cassandra-to-Rust data deserialization;
  • Pluggable authentication strategies;
  • ScyllaDB support;
  • Server events listening;
  • Multiple CQL version support (3, 4), full spec implementation;
  • Query tracing information.

Documentation and examples

Getting started

Add CDRS to your Cargo.toml file as a dependency:

cdrs = { version = "2" }

Then add it as an external crate to your main.rs:

extern crate cdrs;

use cdrs::authenticators::NoneAuthenticator;
use cdrs::cluster::session::{new as new_session};
use cdrs::cluster::{ClusterTcpConfig, NodeTcpConfigBuilder};
use cdrs::load_balancing::RoundRobin;
use cdrs::query::*;

fn main() {
  let node = NodeTcpConfigBuilder::new("127.0.0.1:9042", NoneAuthenticator {}).build();
  let cluster_config = ClusterTcpConfig(vec![node]);
  let no_compression =
    new_session(&cluster_config, RoundRobin::new()).expect("session should be created");

  let create_ks: &'static str = "CREATE KEYSPACE IF NOT EXISTS test_ks WITH REPLICATION = { \
                                 'class' : 'SimpleStrategy', 'replication_factor' : 1 };";
  no_compression.query(create_ks).expect("Keyspace create error");
}

This example configures a cluster consisting of a single node, and uses round-robin load balancing and the default r2d2 values for the connection pool.
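Round-robin balancing simply hands queries to each node in turn. As a minimal, self-contained illustration of the policy (not the cdrs implementation, which balances pooled connections), the selection logic looks like this:

```rust
// Illustrative round-robin node selection; the real cdrs `RoundRobin`
// works over connection pools, but the core policy is the same.
struct RoundRobin {
    next: usize,
}

impl RoundRobin {
    fn new() -> Self {
        RoundRobin { next: 0 }
    }

    // Return the next node, wrapping around at the end of the list.
    fn pick<'a>(&mut self, nodes: &'a [&'a str]) -> &'a str {
        let node = nodes[self.next % nodes.len()];
        self.next += 1;
        node
    }
}

fn main() {
    let nodes = ["127.0.0.1:9042", "127.0.0.1:9043"];
    let mut lb = RoundRobin::new();
    assert_eq!(lb.pick(&nodes), "127.0.0.1:9042");
    assert_eq!(lb.pick(&nodes), "127.0.0.1:9043");
    assert_eq!(lb.pick(&nodes), "127.0.0.1:9042"); // wrapped around
}
```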

License

This project is licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Comments
  • Epoch timestamps are being modified by the driver before being stored in Cassandra DB

    For example:

    Suppose that there is a table in Cassandra called tableA with two columns: an id column and a timestamp column.

    id is INT; timestamp is DECIMAL.

    When executing these lines of code:

    let update_struct_cql: String = format!("update tableA SET  timestamp = ? where id = ?;");
    
    ddb.query_with_values(update_struct_cql,query_values!(current_time, id)).expect("[Err]: Could not update table");  
    

    When I print out the timestamp BEFORE it is used in the above query, it prints something normal, like 1545870134; however, when I check the column value in Cassandra DB, I see something like:

    6.1805539111220E-825570344

    The question is: what happened to the timestamp?
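One likely factor (an assumption, since only the reporter has the full schema): a CQL `timestamp` is a 64-bit count of milliseconds since the Unix epoch, while `decimal` is an arbitrary-precision type with its own wire format, so binding an integer epoch value against a DECIMAL column can yield nonsense like the value above. A self-contained sketch of preparing the value as an i64 in milliseconds, which is what a `timestamp` column expects:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Convert a SystemTime to the i64 epoch-millisecond value a CQL `timestamp`
// column expects (sketch; the column type itself would also need to change
// from DECIMAL to TIMESTAMP for this to round-trip cleanly).
fn epoch_millis(now: SystemTime) -> i64 {
    let d = now.duration_since(UNIX_EPOCH).expect("clock before epoch");
    (d.as_secs() as i64) * 1_000 + i64::from(d.subsec_millis())
}

fn main() {
    println!("epoch millis: {}", epoch_millis(SystemTime::now()));
}
```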

    bug question 
    opened by MaenMax 40
  • Proper error handling in as_rust! and new macro for IntoRustByName

    1. "Return" Results from as_rust! instead of using unwrap() and unreachable!.

    2. Implement IntoRustByName<_> for Row using a new macro row_into_rust_by_name!. row_into_rust_by_name! uses as_rust! and the implementations change a bit, although I think the new behavior is more accurate.

    But it still leaves a problem. get_by_name() returns Option<Result<_>>: None if the column can't be found and Some(Err(_)) if the conversion fails. It also returns a custom error if the value is empty (except for blobs), and it doesn't handle null values explicitly. It would be nice if an Option could be returned that is None in case of a null value. Maybe we could change the return type to Result<Option<_>> and return an error if the column can't be found. I'm not sure what the use case for the current implementation is; is it a common use case to look for a column that might not be there?
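The Result<Option<_>> shape proposed above can be sketched with a stand-in row type (illustrative only, not the cdrs API): Err for a missing column, Ok(None) for a null value, Ok(Some(v)) for a present one.

```rust
use std::collections::HashMap;

// Stand-in "row": column name -> possibly-null i32 value.
// Err(..)    -> the column does not exist at all
// Ok(None)   -> the column exists but holds a null
// Ok(Some(v))-> the column holds a value
fn get_by_name(row: &HashMap<String, Option<i32>>, name: &str) -> Result<Option<i32>, String> {
    match row.get(name) {
        None => Err(format!("column '{}' not found", name)),
        Some(value) => Ok(*value),
    }
}

fn main() {
    let mut row = HashMap::new();
    row.insert("id".to_string(), Some(7));
    row.insert("data".to_string(), None);
    assert_eq!(get_by_name(&row, "id"), Ok(Some(7)));
    assert_eq!(get_by_name(&row, "data"), Ok(None));
    assert!(get_by_name(&row, "missing").is_err());
}
```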

    opened by dfaust 23
  • cargo fmt is done

    I have run cargo fmt on all the files, and the check has been incorporated into Travis as well, so if you push a file with a different style, Travis won't let the build pass.

    Before pushing, run cargo fmt -- --write-mode=diff.

    If anything shows up on the console, run cargo fmt -- --write-mode=overwrite and cargo test, then push; otherwise the build will fail.

    opened by harrydevnull 21
  • Adapt examples to ssl feature

    Reported in #61

    When --all-features was used, the ssl transport was provided via the transport_ssl module. However, the examples were not prepared for this, and cargo test --all-features would fail.

    As both transports are mutually exclusive, it seems reasonable to just use the same module name and make life easier for clients.

    • [x] fix existing examples
    • [x] add examples which would use ssl
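The "same module name" idea can be sketched with cfg attributes (the feature name here is illustrative): whichever transport is compiled in gets re-exported under a single alias, so examples never have to reference transport_ssl directly.

```rust
// Sketch of unifying two mutually exclusive transports under one alias.
// With no features enabled, `transport` resolves to the plain TCP module.
mod transport_tcp {
    pub fn describe() -> &'static str {
        "plain TCP transport"
    }
}

#[cfg(feature = "ssl")]
mod transport_ssl {
    pub fn describe() -> &'static str {
        "TLS transport"
    }
}

#[cfg(not(feature = "ssl"))]
use transport_tcp as transport;
#[cfg(feature = "ssl")]
use transport_ssl as transport;

fn main() {
    // Example code only ever names `transport`, regardless of the feature set.
    println!("{}", transport::describe());
}
```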
    opened by AlexPikalov 14
  • Add blob example

    It took me a while to figure out that calling into() on a Vec<u8> in order to convert it into Bytes doesn't work as I expected. impl<T: Into<Bytes> + Clone + Debug> From<Vec<T>> for Bytes exists but I'm not sure what it does exactly.
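For illustration, a stand-in Bytes wrapper (not the cdrs type, whose generic From<Vec<T>> impl is what caused the confusion) shows the conversion the commenter expected: with a single From<Vec<u8>> impl, calling .into() on a Vec<u8> is unambiguous.

```rust
// Stand-in blob wrapper: one dedicated From<Vec<u8>> impl, so `.into()`
// on a Vec<u8> has exactly one candidate and does the obvious thing.
#[derive(Debug, PartialEq)]
struct Bytes(Vec<u8>);

impl From<Vec<u8>> for Bytes {
    fn from(v: Vec<u8>) -> Self {
        Bytes(v)
    }
}

fn main() {
    let blob: Vec<u8> = vec![0xde, 0xad, 0xbe, 0xef];
    let bytes: Bytes = blob.clone().into();
    assert_eq!(bytes, Bytes(blob));
}
```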

    opened by dfaust 13
  • updated to the latest rustfmt and clippy warnings

    rustfmt has changed, and the new version has been incorporated back into the project. Right now I have enabled forced installation of rustfmt in Travis, so builds might be slightly slower. Once the dust settles we shall enable caching of cargo rustfmt.

    opened by harrydevnull 11
  • "failed to fill whole buffer" with multiple nodes

    When configuring multiple Scylla nodes, inserts fail with the error "failed to fill whole buffer":

    thread '<unnamed>' panicked at 'Failed to insert data: Io(Custom { kind: UnexpectedEof, error: StringError("failed to fill whole buffer") })', libcore/result.rs:1009:5
    stack backtrace:
       0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
                 at libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
       1: std::sys_common::backtrace::print
                 at libstd/sys_common/backtrace.rs:71
                 at libstd/sys_common/backtrace.rs:59
       2: std::panicking::default_hook::{{closure}}
                 at libstd/panicking.rs:211
       3: std::panicking::default_hook
                 at libstd/panicking.rs:227
       4: std::panicking::rust_panic_with_hook
                 at libstd/panicking.rs:476
       5: std::panicking::continue_panic_fmt
                 at libstd/panicking.rs:390
       6: rust_begin_unwind
                 at libstd/panicking.rs:325
       7: core::panicking::panic_fmt
                 at libcore/panicking.rs:77
       8: core::result::unwrap_failed
                 at libcore/macros.rs:26
       9: <core::result::Result<T, E>>::expect
                 at libcore/result.rs:835
      10: <unknown>
                 at src/main.rs:62
    

    The issue can be reproduced with the following setup:

    1. Set up a Scylla cluster with 3 nodes:
    docker run --name scylla-0 -p 9042:9042 -d scylladb/scylla
    # wait until scylla-0 is up: docker exec -it scylla-0 nodetool status
    docker run --name scylla-1 -p 9043:9042 -d scylladb/scylla --seeds="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' scylla-0)"
    docker run --name scylla-2 -p 9044:9042 -d scylladb/scylla --seeds="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' scylla-0)"
    
    2. Create the column family within cqlsh: docker exec -it scylla-0 cqlsh
    CREATE KEYSPACE test
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor' : 2};
    
    use test;
    
    CREATE TABLE data (
        id int,
        epoch_utc timestamp,
        value double,
        PRIMARY KEY (id, epoch_utc))
    WITH CLUSTERING ORDER BY (epoch_utc DESC)
    AND COMPACTION = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': '1'
    };
    
    3. Cargo.toml
    [package]
    name = "rust_test_scylla"
    version = "0.1.0"
    authors = ["Pascal Sachs <[email protected]>"]
    edition = "2018"
    
    [dependencies]
    cdrs = "^2.0.0-beta.6"
    cdrs_helpers_derive = "0.1.0"
    rand = "^0.6.5"
    
    [profile.dev]
    panic = "abort"
    
    [profile.release]
    panic = "abort"
    
    4. main.rs
    #[macro_use]
    extern crate cdrs;
    #[macro_use]
    extern crate cdrs_helpers_derive;
    
    use std::sync::Arc;
    use std::thread;
    use std::time::SystemTime;
    
    use rand::prelude::*;
    
    use cdrs::authenticators::NoneAuthenticator;
    use cdrs::cluster::session::{new_lz4 as new_session, Session};
    use cdrs::cluster::{ClusterTcpConfig, NodeTcpConfigBuilder, TcpConnectionPool};
    use cdrs::frame::IntoBytes;
    use cdrs::load_balancing::RoundRobinSync;
    use cdrs::query::*;
    use cdrs::types::prelude::*;
    
    type ScyllaSession = Session<RoundRobinSync<TcpConnectionPool<NoneAuthenticator>>>;
    
    const SCYLLA_NODES: &'static [&'static str] =
        &["localhost:9042", "localhost:9043", "localhost:9044"];
    
    const INSERT_DATA: &'static str = "\
                                       INSERT INTO test.data \
                                       (id, epoch_utc, value) \
                                       VALUES (?, ?, ?)";
    
    fn main() {
        let nodes = SCYLLA_NODES
            .into_iter()
            .map(|addr| NodeTcpConfigBuilder::new(addr, NoneAuthenticator {}).build())
            .collect();
        let cluster_config = ClusterTcpConfig(nodes);
        let session: Arc<ScyllaSession> = Arc::new(
            new_session(&cluster_config, RoundRobinSync::new())
                .expect("Could not connect to scylla cluster"),
        );
    
        for i in 0..20 {
            let thread_session = session.clone();
            thread::spawn(move || {
                let mut rng = rand::thread_rng();
                let query_insert_data = thread_session
                    .prepare(INSERT_DATA)
                    .expect("Failed to prepare insert data query");
    
                let id = i;
                let epoch_utc = (1_000
                    * SystemTime::now()
                        .duration_since(SystemTime::UNIX_EPOCH)
                        .unwrap()
                        .as_secs()) as i64;
                let value = rng.gen();
                let values = DataStruct {
                    id,
                    epoch_utc,
                    value,
                }
                .into_query_values();
                thread_session
                    .exec_with_values(&query_insert_data, values)
                    .expect("Failed to insert data");
            })
            .join()
            .expect("thread error");
        }
    }
    
    #[derive(Clone, Debug, IntoCDRSValue, PartialEq)]
    pub struct DataStruct {
        id: i32,
        epoch_utc: i64,
        value: f64,
    }
    
    impl DataStruct {
        fn into_query_values(self) -> QueryValues {
            query_values!(self.id, self.epoch_utc, self.value)
        }
    }
    
    opened by psachs 10
  • Add possibility to clone Row

    Needed to take some vector element without breaking a vector of rows (also needed since the try_from_row method can't work with refs).

    (Also, please tell me if there is any communication channel; I had a few additional questions.)
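What the request boils down to, sketched with a stand-in Row type (not the cdrs one): deriving Clone lets you take a copy of one element without consuming the vector of rows.

```rust
// Stand-in row type; deriving Clone is the whole feature being requested.
#[derive(Clone, Debug, PartialEq)]
struct Row {
    id: i32,
    name: String,
}

fn main() {
    let rows = vec![
        Row { id: 1, name: "a".to_string() },
        Row { id: 2, name: "b".to_string() },
    ];
    // Clone one row out; `rows` stays intact for further iteration.
    let second = rows[1].clone();
    assert_eq!(second, Row { id: 2, name: "b".to_string() });
    assert_eq!(rows.len(), 2);
}
```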

    opened by cnd 10
  • CDRS driver fails to fetch data from Cassandra if one or more of the columns has a "null" value

    Code:

    fn select_struct(session: &CurrentSession) {
        let select_struct_cql = "SELECT * FROM autopush.message_2018_10 where uaid='185afbf3d7a94fc6a577c20ffc62431d'";
        let rows = session
            .query(select_struct_cql)
            .expect("query")
            .get_body()
            .expect("get body")
            .into_rows()
            .expect("into rows");
        for row in rows {
            let my_row: RowStruct = RowStruct::try_from_row(row).expect("into RowStruct");
            println!("struct got: {:?}", my_row);
        }
    }

    Error details: thread 'main' panicked at 'into RowStruct: General("Column or UDT property 'key' is empty")', libcore/result.rs:1009:5

    Table description:

    CREATE TABLE autopush.message_2018_10 (
        uaid text,
        chidmessageid text,
        chids set,
        current_timestamp decimal,
        data text,
        headers map<text, text>,
        timestamp int,
        ttl int,
        updateid text,
        PRIMARY KEY (uaid, chidmessageid)
    ) WITH CLUSTERING ORDER BY (chidmessageid ASC)
        AND bloom_filter_fp_chance = 0.01
        AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
        AND comment = ''
        AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
        AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
        AND crc_check_chance = 1.0
        AND dclocal_read_repair_chance = 0.1
        AND default_time_to_live = 0
        AND gc_grace_seconds = 864000
        AND max_index_interval = 2048
        AND memtable_flush_period_in_ms = 0
        AND min_index_interval = 128
        AND read_repair_chance = 0.0
        AND speculative_retry = '99PERCENTILE';

    Example of an entry (Which has been failed) to be fetched into RowStructure:

    185afbf3d7a94fc6a577c20ffc62431d | 02:1545961471067:86ba2621-db55-4749-b38f-f8300133ae17 | null | null | K61oUoC5-EZRDHi2-R-EKBFO_aiV6iYFuAPAZ1zJmJ6F8mxdwaQ | {'crypto_key': 'dh=BM-rwWt2s4L6sFNuqJA6pzNE9zy-h1mbiUU_28hndA-IbyQJM3PLCk5787515X_9QX66IP34b8JM1HXQHzPb-A8', 'encoding': 'aesgcm', 'encryption': 'salt=bB7LtNdeLUXbtJTzM7715Q'} | 1545961468 | 300 | gAAAAABcJX_8_7OWPBENWF-w9BEDpX-O_rILoKS8PQir3c18ZNJN-tElzsrHUoA3uQs_RuJs7qNoSupz4LwUh6SxtkmsyWO7NnwVFyvwUH9mp-qmTl948hQl4fNToqArtq9d5P1oVo2yuqlHTFCf6uvkagt7pxr17kLuJnDgAOG5dvQ_E1qgkr6DYw7E5DpX3up1QLG-OI6e
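A common way to survive null columns (sketched here with a stand-in struct, not the cdrs derive machinery) is to model nullable columns as Option, so a null chids or current_timestamp maps to None instead of failing the whole row conversion:

```rust
// Stand-in for the failing RowStruct: nullable columns become Option fields,
// so null cells deserialize to None rather than raising an "is empty" error.
#[derive(Debug)]
struct MessageRow {
    uaid: String,
    chids: Option<Vec<String>>, // null in the failing entry above
    ttl: Option<i32>,
}

fn main() {
    let row = MessageRow {
        uaid: "185afbf3d7a94fc6a577c20ffc62431d".to_string(),
        chids: None,
        ttl: Some(300),
    };
    assert!(row.chids.is_none());
    assert_eq!(row.ttl, Some(300));
    println!("row converted despite nulls: {:?}", row);
}
```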

    question 
    opened by MaenMax 9
  • CI enabled for windows environment

    I have enabled CI for the Windows environment. Unit tests are working, but the integration tests are still not, as I was unable to get Cassandra installed on the Windows image.

    opened by harrydevnull 9
  • Inserting null causes runtime error

    Noticed that inserting/reading data that contains NULL causes:

    fatal runtime error: out of memory

    I reproduced the issue below. I couldn't trace the error due to lack of details.

    cc @AlexPikalov @harrydevnull

    opened by ernestas-poskus 9
  • How can I resolve anonymous/staticpassword dynamically?

    I was trying to make my credentials configurable. Like this:

        let session = if let Some((user, pass)) = pass {
            let auth = StaticPasswordAuthenticator::new(user, pass);
            let node = NodeTcpConfigBuilder::new("127.0.0.1:9042", auth).build();
            let cluster_config = ClusterTcpConfig(vec![node]);
            session::new(&cluster_config, RoundRobin::new()).expect("session should be created")
        } else {
            let auth = NoneAuthenticator {};
            let node = NodeTcpConfigBuilder::new("127.0.0.1:9042", auth).build();
            let cluster_config = ClusterTcpConfig(vec![node]);
            session::new(&cluster_config, RoundRobin::new()).expect("session should be created")
        };
    

    That obviously fails, as the two branches produce incompatible session types. What would be the proper way to do this?
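One general Rust pattern (illustrative names only; whether it fits cdrs's generic Session type depends on the authenticator traits) is to push the decision behind an enum, so a single type holds either configuration and the session is built once from it:

```rust
// Decide the credentials first, behind one enum type, then act on that
// single value; both branches of the `match` now yield the same type.
enum Auth {
    None,
    Password { user: String, pass: String },
}

fn describe(auth: &Auth) -> String {
    match auth {
        Auth::None => "anonymous".to_string(),
        Auth::Password { user, .. } => format!("password auth as {}", user),
    }
}

fn main() {
    // `configured` would come from your config file / environment.
    let configured = Some(("cassandra".to_string(), "secret".to_string()));
    let auth = match configured {
        Some((user, pass)) => Auth::Password { user, pass },
        None => Auth::None,
    };
    assert_eq!(describe(&auth), "password auth as cassandra");
}
```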

    opened by sisso 0
  • Connection pool for CDRS 2.x+

    It's more a question than an issue. I'm trying to use CDRS in an actix-web application. I would like to share the database session among the HTTP requests, but for that the struct must implement the Clone trait. Given that, r2d2 would fit nicely. However, when trying to use TcpConnectionsManager to pass to a Pool from r2d2, I can only get a TransportTcp struct and not the Session itself. Am I missing something in order to get a proper session from the r2d2 pool?

    opened by bbarin 0
  • cargo audit warns that lz4-compress is unmaintained

    $ cargo audit
        Fetching advisory database from `https://github.com/RustSec/advisory-db.git`
          Loaded 126 security advisories (from /home/olexiyb/.cargo/advisory-db)
        Updating crates.io index
        Scanning Cargo.lock for vulnerabilities (54 crate dependencies)
         Success No vulnerable packages found
    
    warning: 1 warning found
    
    Crate:  lz4-compress
    Title:  lz4-compress is unmaintained
    Date:   2017-04-17
    URL:    https://rustsec.org/advisories/RUSTSEC-2017-0007
    Dependency tree: 
    lz4-compress 0.1.0
    └── cdrs 2.3.1
    

    I have tried to use lz4-compression, but it requires some improvements in error handling. I would also recommend taking a look at lz-fear.

    opened by olexiyb 1
  • Removes clone that is not needed and added a macro to omit duplicate code

    Note that https://github.com/AlexPikalov/cdrs/pull/347 and https://github.com/AlexPikalov/cdrs/pull/337 must be merged first, since this PR contains commits that are also in those PRs.

    opened by Jasperav 0
Releases: v2.3.1
Owner: Alex Pikalov