rust-zookeeper

A pure Rust library for Apache ZooKeeper, built on MIO.

Overview

A ZooKeeper client written 100% in Rust.

This library is intended to be equivalent to the official (low-level) ZooKeeper client that ships with the official ZK distribution.

I have plans to implement recipes and more complex Curator-like logic as well, but that takes a lot of time, so pull requests are more than welcome! At the moment only PathChildrenCache is implemented.

Usage

Put this in your Cargo.toml:

[dependencies]
zookeeper = "0.5"

And this in your crate root:

extern crate zookeeper;

Examples

Check the examples directory
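For a quick start, here is a minimal sketch (not taken from the examples directory): it assumes a ZooKeeper server listening on 127.0.0.1:2181 and the crate's documented connect/listener/read API; error handling is kept deliberately minimal.

```rust
use std::time::Duration;
use zookeeper::{Watcher, WatchedEvent, ZooKeeper};

// A watcher that just logs every event it receives.
struct LoggingWatcher;
impl Watcher for LoggingWatcher {
    fn handle(&self, e: WatchedEvent) {
        println!("{:?}", e)
    }
}

fn main() {
    // Connect with a 15-second session timeout.
    let zk = ZooKeeper::connect("127.0.0.1:2181", Duration::from_secs(15), LoggingWatcher)
        .expect("failed to connect to ZooKeeper");

    // Get notified of client state changes (Connecting, Connected, ...).
    zk.add_listener(|state| println!("client state: {:?}", state));

    // Read the root znode's data and stat.
    let (data, _stat) = zk.get_data("/", false).expect("failed to read /");
    println!("/ holds {} bytes", data.len());
}
```

Running this against a live server should print the connection events followed by the size of the root znode's data.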

Feature and Bug Handling

If you find a bug or would like to see a feature implemented, please raise an issue or send a pull request.

Documentation

Documentation is available on the gh-pages branch.

Build and develop

cd zk-test-cluster
mvn clean package
cd ..
cargo test

Contributing

All contributions are welcome! If you need some inspiration, please take a look at the currently open issues.

Comments
  • Validate that serialization is fully correct

    While using the library to write https://github.com/rgs1/zk-shell-rs, everything seems to work. Though when trying to capture packets via zk-dump (https://github.com/twitter/zktraffic), I don't see anything (see https://github.com/twitter/zktraffic/issues/56).

    I'll keep digging to make sure that rust-zookeeper is generating correct jute serialized packets (though, given that the server does see my commands, it seems to be right - but there might be some bugs).

    opened by rgs1 7
  • IO thread panic on Zookeeper client reconnect failure

    I have a use case where I am utilizing a zookeeper::ZooKeeper client instance to maintain an ephemeral znode while my application does other work. I've found that the client panics in its reconnection logic on an internal thread when I kill the zookeeper server that I am testing with. This leaves my application running but without the client connection in a functional state.

    The backtrace that I see is the following:

    thread 'io' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/libcore/result.rs:1009:5
    note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
    stack backtrace:
       0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
                 at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
       1: std::sys_common::backtrace::_print
                 at src/libstd/sys_common/backtrace.rs:71
       2: std::panicking::default_hook::{{closure}}
                 at src/libstd/sys_common/backtrace.rs:59
                 at src/libstd/panicking.rs:211
       3: std::panicking::default_hook
                 at src/libstd/panicking.rs:227
       4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
                 at src/libstd/panicking.rs:491
       5: std::panicking::continue_panic_fmt
                 at src/libstd/panicking.rs:398
       6: std::panicking::try::do_call
                 at src/libstd/panicking.rs:325
       7: core::char::methods::<impl char>::escape_debug
                 at src/libcore/panicking.rs:95
       8: core::alloc::Layout::repeat
                 at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libcore/macros.rs:26
       9: <zookeeper::acl::Acl as core::clone::Clone>::clone
                 at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libcore/result.rs:808
      10: zookeeper::io::ZkIo::reconnect
                 at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:326
      11: zookeeper::io::ZkIo::ready_zk
                 at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:429
      12: zookeeper::io::ZkIo::ready
                 at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:366
      13: zookeeper::io::ZkIo::ready_timer
                 at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/io.rs:549
      14: zookeeper::zookeeper::ZooKeeper::connect::{{closure}}
                 at /Users/dtw/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.5.5/src/zookeeper.rs:78
    

    I believe this is due to the unwrap() call at this line: https://github.com/bonifaido/rust-zookeeper/blob/e25f2a0ee6cc2667430054f08c8c69fca1c8c4e9/src/io.rs#L326

    I also have a listener on the connection that right now just logs the state transitions of the client. I see the client go through the Connected -> NotConnected and NotConnected -> Connecting state transitions before the panic happens.

    In order to reproduce this behavior I've been using Docker to start and stop a local ZK server using the Docker Hub official Zookeeper Docker image. To run the server and expose a port, you can run docker run --rm -p 2181:2181 --name test-zookeeper -d zookeeper on a machine with docker installed.

    I could handle the disconnect from within my application by watching for the NotConnected event and taking action from there (either exiting the rest of the application or trying to rebuild the client) but I think it would be nice to resolve some of this from within the client library as well. It doesn't seem like the client's internal thread should panic, leaving the last client state event the caller receives to be Connecting.

    Two options that come to mind for handling this situation are:

    1. Instead of panicking, publish some sort of client state indicating it is permanently failed/not connected. It looks like ZkState::Closed might already fit the situation and could potentially be published in this case.
    2. Add a bit more logic to the reconnect routine to continually retry or perhaps have a definable policy to try more times before entering into the state I describe in option one.

    What do you think about these options? Would you be amenable to a PR to at the least handle the case where the reconnect fails and we publish a ZkState::Closed event to the listeners?
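    To make option 1 concrete, here is a hypothetical, self-contained sketch. The enum mirrors the ZkState variants mentioned above; Reconnector and try_connect are illustrative stand-ins, not the crate's actual internals.

```rust
use std::io;

#[allow(dead_code)]
#[derive(Clone, Copy, Debug, PartialEq)]
enum ZkState {
    Connecting,
    Connected,
    NotConnected,
    Closed,
}

// Illustrative stand-in for the client's io thread.
struct Reconnector {
    max_retries: u32,
    listeners: Vec<Box<dyn Fn(ZkState)>>,
}

impl Reconnector {
    fn publish(&self, state: ZkState) {
        for l in &self.listeners {
            l(state);
        }
    }

    // `try_connect` stands in for the real socket reconnect attempt.
    fn reconnect<F: Fn() -> io::Result<()>>(&self, try_connect: F) -> ZkState {
        for _ in 0..self.max_retries {
            self.publish(ZkState::Connecting);
            if try_connect().is_ok() {
                self.publish(ZkState::Connected);
                return ZkState::Connected;
            }
        }
        // Out of retries: publish a terminal state instead of panicking.
        self.publish(ZkState::Closed);
        ZkState::Closed
    }
}

fn main() {
    let r = Reconnector {
        max_retries: 3,
        listeners: vec![Box::new(|s| println!("state: {:?}", s))],
    };
    // A reconnect that always fails ends in Closed, with no panic.
    let end = r.reconnect(|| Err(io::Error::new(io::ErrorKind::NotFound, "server gone")));
    println!("final state: {:?}", end);
}
```

    The key point is the last two lines of `reconnect`: the caller's listeners always receive a final, terminal event rather than being left at Connecting.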

    opened by davidwilemski 5
  • Build failed on ubuntu 14.04.1

    Building this crate failed on Ubuntu 14.04.1:

    Host info:

    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=14.04
    DISTRIB_CODENAME=trusty
    DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
    

    Build command: cargo build --release --bin front_service --manifest-path ./core/Cargo.toml --verbose

    Build log:

       Compiling zookeeper v0.3.0
         Running `rustc --crate-name zookeeper /root/.cargo/registry/src/github.com-1ecc6299db9ec823/zookeeper-0.3.0/src/lib.rs --crate-type lib --emit=dep-info,link -C opt-level=3 -C metadata=18afff2b8efa19af -C extra-filename=-18afff2b8efa19af --out-dir /home/ubuntu/web_service/target/release/deps -L dependency=/home/ubuntu/web_service/target/release/deps --extern log=/home/ubuntu/web_service/target/release/deps/liblog-c73672fbee7ce8e5.rlib --extern mio=/home/ubuntu/web_service/target/release/deps/libmio-1e63d8e5040d2907.rlib --extern snowflake=/home/ubuntu/web_service/target/release/deps/libsnowflake-9e3c0622cb044461.rlib --extern lazy_static=/home/ubuntu/web_service/target/release/deps/liblazy_static-593470b8b1e9df82.rlib --extern zookeeper_derive=/home/ubuntu/web_service/target/release/deps/libzookeeper_derive-c12483d6b5c097d3.so --extern byteorder=/home/ubuntu/web_service/target/release/deps/libbyteorder-b7e41c93a912a264.rlib --extern bytes=/home/ubuntu/web_service/target/release/deps/libbytes-9e91f13708218df5.rlib --cap-lints allow`
    rustc: /checkout/src/llvm/lib/Analysis/ValueTracking.cpp:1594: void computeKnownBits(const llvm::Value*, llvm::APInt&, llvm::APInt&, unsigned int, const {anonymous}::Query&): Assertion `(KnownZero & KnownOne) == 0 && "Bits known to be one AND zero?"' failed.
    error: Could not compile `zookeeper`.
    
    opened by kamyuentse 5
  • Add LeaderLatch recipe

    Related to #17

    This PR adds the LeaderLatch to recipes/leader. It is a port of the Apache Curator LeaderLatch implementation.

    It's certainly not a perfect port and so a few things worth noting:

    • I am no expert in Zookeeper, or distributed systems in general, so this could be horribly incorrect (but super keen to get some feedback and code review! :smile:)
    • In my very limited understanding of atomics and synchronization, I have used Ordering::SeqCst as, if I'm not mistaken, that is how Java synchronizes atomics.
    • In the Java impl, atomic references are also used for strings but here we just use a Mutex (for path and subscription).
    • This impl doesn't (yet) handle when the client reconnects to ZK after it has disconnected. On a disconnect, it will setLeadership(false) but on a reconnect it should call LeaderLatch::reset.
    • The ZK docs talk about using a GUID (the id attribute) to help clients recover from errors, specifically when calling ZooKeeper::create. This isn't implemented, and given the latch znodes are ephemeral (and sequential) I'm not sure how useful it would be since clients could just start a new LeaderLatch - so might be something for later.
    • Parent znodes should be created with create_mode=container but ZooKeeperExt::ensure_path hardcodes the create_mode as persistent. I've just copied that ensure_path into recipes/leader, but might be nice to consolidate the two impls.

    I've also added some (very rough) tests and an example to demonstrate a leader being elected and, when the leader is killed, a follower becoming the new leader.

    This recipe definitely needs more tests, probably an explanation of the example, and more comprehensive logging and documentation (maybe part of #15 too) but wanted to put this up to start getting some feedback. And if it's merged before some of these improvements are in, I'm happy to follow up with some more PRs :+1:

    Thanks! :tada:

    opened by joshleeb 4
  • Bump dependency versions

    This bumps the syn dependency, which is pretty costly to have to double-compile in downstream crates, and env_logger which is common enough that keeping it up-to-date reduces bloat.

    opened by jonhoo 4
  • How to receive an event when node data changes?

    struct LoggingWatcher;
    impl Watcher for LoggingWatcher {
        fn handle(&self, e: WatchedEvent) {
            info!("{:?}", e)
        }
    }
    
    struct RootWatcher;
    impl Watcher for RootWatcher {
        fn handle(&self, e: WatchedEvent) {
            info!("Root->> {:?}", e)
        }
    }
    
    fn zk_example2() {
        let zk = ZooKeeper::connect("127.0.0.1/test", Duration::from_secs(15), LoggingWatcher).unwrap();
    
        zk.add_listener(|zk_state| println!("New ZkState is {:?}", zk_state));
    
        // how to receiver event when node data change?
        let data = zk.get_data_w("/", RootWatcher).unwrap();
    
        println!("press enter to close client");
        let mut tmp = String::new();
        io::stdin().read_line(&mut tmp).unwrap();
    }
    
    fn main() {
        zk_example2();
    }
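    Regarding the question in the snippet: ZooKeeper watches are one-shot, so get_data_w delivers at most one event per registration and the watch must be re-registered after it fires. A hedged sketch of one way to do that, assuming closures can be used as watchers and using channel plumbing that is purely illustrative (untested against a live server):

```rust
use std::sync::mpsc;
use std::time::Duration;
use zookeeper::{WatchedEvent, Watcher, ZooKeeper};

struct LoggingWatcher;
impl Watcher for LoggingWatcher {
    fn handle(&self, e: WatchedEvent) {
        println!("{:?}", e)
    }
}

fn main() {
    let zk = ZooKeeper::connect("127.0.0.1:2181", Duration::from_secs(15), LoggingWatcher)
        .expect("connect failed");
    let (tx, rx) = mpsc::channel();
    loop {
        // Register (or re-register) the data watch on each iteration.
        let tx = tx.clone();
        let (data, _stat) = zk
            .get_data_w("/", move |ev: WatchedEvent| {
                let _ = tx.send(ev);
            })
            .expect("get_data_w failed");
        println!("current data: {} bytes", data.len());
        // Block until the one-shot watch fires, then loop to re-register.
        let event = rx.recv().expect("watch channel closed");
        println!("node changed: {:?}", event);
    }
}
```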
    
    question 
    opened by isnlan 4
  • Bump more dependencies

    Note the change to the try_io traits. Since they are not actually public (and I've now marked them as such), this isn't a breaking change.

    It'd be great to issue a new point release with these new changes :+1:

    opened by jonhoo 3
  • Fix reconnection hangup

    • I believe that right after the start, ZkState should be Connecting.
    • If, after connecting to the server, the session timeout is adjusted instead of pinging, the io loop will stop more often, even with a valid session.
    • When TcpStream::connect hangs, the io loop always halted. I made an attempt to support TCP connection timeouts via timers.

    still new-ish to rust and english :-(

    opened by sakateka 3
  • Incorrect maintenance of zxid

    The code currently blindly adopts the zxid of each new received header:

    https://github.com/bonifaido/rust-zookeeper/blob/a43f686a27a91461a374f0d65c732930dfb36084/src/io.rs#L176

    However, as I recently discovered while implementing tokio-zookeeper, watch events actually yield a zxid response of -1! This means that, if you were to crash right after handling a watch event, and then reconnect, you would reconnect with zxid = -1, which is not the right value.

    I think the code needs to be changed so that it instead adopts the zxid only if the response is not a watcher event.
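    The check could be as small as a helper like the one below (names are hypothetical, not the crate's actual internals): only adopt the header's zxid for non-watch responses with a valid zxid.

```rust
// Decide which zxid to keep after reading a response header. Watch events
// carry zxid -1, so adopting them blindly corrupts the session's last-seen
// zxid; keep the current value instead.
fn updated_zxid(current_zxid: i64, is_watch_event: bool, header_zxid: i64) -> i64 {
    if is_watch_event || header_zxid == -1 {
        current_zxid
    } else {
        header_zxid
    }
}

fn main() {
    // Normal response: adopt the new zxid.
    println!("{}", updated_zxid(10, false, 12)); // 12
    // Watch event with zxid -1: keep the old value.
    println!("{}", updated_zxid(10, true, -1)); // 10
}
```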

    opened by jonhoo 3
  • Implement Error for ZkError

    Closes #30

    Ok, here is a basic implementation. I'm happy to change this however y'all see fit. I'm still new to making the error classes and rust in general. Let the reviews begin!

    opened by drusellers 3
  • Infinite Loop

    Hello! When ZooKeeper is running the lib works well, but when ZooKeeper is not running the lib responds with an infinite loop: "ERROR:zookeeper::io: Failed to write socket: Error { repr: Os { code: 57, message: "Socket is not connected" } }" Is there any way to control this and finish when the error occurs?

    Thanks in advance!!
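    One possible shape for such control, sketched with only the standard library (illustrative; not the crate's actual io loop): bound the retries with exponential backoff and surface the last error instead of looping forever.

```rust
use std::io;
use std::thread;
use std::time::Duration;

// Retry `op` up to `max_attempts` times, sleeping with exponential backoff
// between failures, and return the last error if all attempts fail.
fn retry_with_backoff<F>(mut op: F, max_attempts: u32) -> io::Result<()>
where
    F: FnMut() -> io::Result<()>,
{
    let mut delay = Duration::from_millis(10);
    let mut last_err = None;
    for _ in 0..max_attempts {
        match op() {
            Ok(()) => return Ok(()),
            Err(e) => {
                last_err = Some(e);
                thread::sleep(delay);
                delay = delay.saturating_mul(2);
            }
        }
    }
    Err(last_err.unwrap_or_else(|| io::Error::new(io::ErrorKind::Other, "no attempts made")))
}

fn main() {
    // Succeeds on the third attempt.
    let mut calls = 0;
    let result = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 {
                Err(io::Error::new(io::ErrorKind::NotConnected, "socket is not connected"))
            } else {
                Ok(())
            }
        },
        5,
    );
    println!("{:?} after {} calls", result, calls);
}
```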

    opened by f3r10 3
  • ZooKeeper client enters a busy loop if the connection port is not listening

    I wrote a demo as follows. If the port is not being listened on, the client tries to write to the socket in a busy loop:

    use zookeeper::{ZooKeeper, Watcher, WatchedEvent};
    use std::time::Duration;
    use std::thread;

    struct LoggingWatcher;
    impl Watcher for LoggingWatcher {
        fn handle(&self, e: WatchedEvent) {
            println!("{:?}", e)
        }
    }

    fn main() {
        env_logger::init();
        let url = "127.0.0.1:2081";
        let zk = ZooKeeper::connect(url, Duration::from_secs(1), LoggingWatcher).unwrap();

        loop {
            thread::sleep(Duration::from_secs(5));
        }
    }

    The screen prints the failure log in a busy loop:

    [2022-09-20T08:57:59Z ERROR zookeeper::io] Failed to write socket: Os { code: 32, kind: BrokenPipe, message: "Broken pipe" }

    opened by dongweifly 0
  • In LeaderLatch, the ZooKeeper client can't be dropped after a ZooKeeper network error

    In leader.rs, fn start:

       let latch = self.clone();
       // LeaderLatch has the field zk, so after this the ZooKeeper holds a reference back to self:
       let subscription = self.zk.add_listener(move |x| handle_state_change(&latch, x));

    In leader.rs, fn stop:

       self.set_path(None)?; // if set_path fails, remove_listener is never called, so the Arc<ZooKeeper> is never dropped
       ......
       self.zk.remove_listener(sub);
    
    opened by widefire 0
  • Upgrade to 2018 Edition

    With the 2021 edition "coming soon", @bonifaido WDYT of upgrading to the 2018 edition to ease the future switch to the 2021 edition when released?

    opened by joshleeb 3
  • Error on drop after close

    There seems to be an error that shows up when dropping ZooKeeper after previously closing the connection with ZooKeeper::close. I believe this is because ZooKeeper::drop will try to close the connection, and that will error out if it is already closed.

    At this stage nothing particularly bad happens other than an unnecessary attempt to make a request to ZK (to close the session), and an error that is logged.

    [<timestamp> ERROR zookeeper::zookeeper] error closing zookeeper connection in drop: ConnectionLoss
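
    A hypothetical, stdlib-only sketch of a fix: track whether the session was already closed so Drop can skip the redundant close request. Session is an illustrative stand-in, not the crate's code.

```rust
// Illustrative stand-in for the client: close() is idempotent, and Drop
// reuses it, so dropping after an explicit close sends nothing.
struct Session {
    closed: bool,
    close_requests: u32,
}

impl Session {
    fn new() -> Self {
        Session { closed: false, close_requests: 0 }
    }

    fn close(&mut self) {
        if !self.closed {
            self.close_requests += 1; // the real client would send the close request here
            self.closed = true;
        }
    }
}

impl Drop for Session {
    fn drop(&mut self) {
        // Skip the request (and the resulting error log) if already closed.
        self.close();
    }
}

fn main() {
    let mut s = Session::new();
    s.close();
    s.close(); // second close is a no-op
    println!("close requests sent: {}", s.close_requests); // 1
}
```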
    
    opened by joshleeb 0
  • Connecting to ipv6 fails on Windows 10

    When trying to connect to a ZK server using an IPv6 address, the connection fails with Failed to read socket: Os { code: 10049, kind: AddrNotAvailable.

    opened by krojew 1
Releases: 0.7.0

Owner: Nándor István Krácser (linkedin.com/in/nandorkracser/)