Kafka Rust Client

Rust client for Apache Kafka.

Project Status

This project has recently been taken over by John Ward as maintainer. The current status: I am bringing the project up to date with the latest dependencies, removing deprecated Rust code, and adjusting the tests.

New Home

We are proposing to move the repo to a new home: https://github.com/kafka-rust

Documentation

Installation

This crate works with Cargo and is available on crates.io. The API is still under heavy development; we do follow semantic versioning, but expect the version number to grow quickly.

[dependencies]
kafka = "0.8"

To build kafka-rust, the usual cargo build should suffice. The crate supports various features which can be turned off at compile time; see kafka-rust's Cargo.toml and cargo's documentation for details.
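
For example, to depend on the crate with its optional features disabled (a sketch; check Cargo.toml for the authoritative feature list - the v0.3.2 release notes below mention "snappy", "gzip", and "security"):

[dependencies.kafka]
version = "0.8"
default-features = false
# Re-enable selected features as needed, e.g.:
# features = ["snappy", "gzip"]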

Supported Kafka version

kafka-rust is tested for compatibility with Kafka 0.8.2 and newer. However, not all features from Kafka 0.9 and newer are supported yet.

Examples

As mentioned, the cargo-generated documentation contains some examples. In addition, standalone, compilable example programs are provided in the examples directory of the repository.

Consumer

This is a higher-level consumer API for Kafka and is provided by the module kafka::consumer. It provides convenient offset management support on behalf of a specified group. This is the API a client application of this library wants to use for receiving messages from Kafka.
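
A minimal sketch of the consumer loop (assumptions: a broker on localhost:9092, an existing topic my-topic, and a group my-group; the calls mirror the consumer code quoted in the comments below):

extern crate kafka;

use kafka::consumer::{Consumer, FetchOffset};

fn main() {
    let mut consumer = Consumer::from_hosts(vec!["localhost:9092".to_owned()])
        .with_topic("my-topic".to_owned())
        .with_group("my-group".to_owned())
        .with_fallback_offset(FetchOffset::Earliest)
        .create()
        .unwrap();
    loop {
        // Poll a batch of message sets and process each message.
        for ms in consumer.poll().unwrap().iter() {
            for m in ms.messages() {
                println!("{:?}", m);
            }
            // Mark this message set as consumed.
            consumer.consume_messageset(ms);
        }
        // Persist consumed offsets on behalf of the group.
        consumer.commit_consumed().unwrap();
    }
}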

Producer

This is a higher-level producer API for Kafka and is provided by the module kafka::producer. It provides convenient automatic partition assignment capabilities through partitioners. This is the API a client application of this library wants to use for sending messages to Kafka.
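
A minimal sketch of sending a message (assumptions: a broker on localhost:9092 and an existing topic my-topic; the ack settings are illustrative):

extern crate kafka;

use std::time::Duration;
use kafka::producer::{Producer, Record, RequiredAcks};

fn main() {
    let mut producer = Producer::from_hosts(vec!["localhost:9092".to_owned()])
        // Wait up to one second for acknowledgements; one broker ack suffices.
        .with_ack_timeout(Duration::from_secs(1))
        .with_required_acks(RequiredAcks::One)
        .create()
        .unwrap();
    // Record::from_value sends a key-less value; the configured partitioner
    // chooses the destination partition.
    let record = Record::from_value("my-topic", "hello, kafka".as_bytes());
    producer.send(&record).unwrap();
}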

KafkaClient

KafkaClient in the kafka::client module is the central point of this API. However, it is a mid-level abstraction for Kafka, more suitable for building higher-level APIs than for direct use; applications typically want the already mentioned Consumer and Producer. Nevertheless, the main features of KafkaClient are the following (a usage sketch follows the list):

  • Loading metadata
  • Fetching topic offsets
  • Sending messages
  • Fetching messages
  • Committing a consumer group's offsets
  • Fetching a consumer group's offsets
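
A minimal sketch of the client-level API (assumptions: a broker on localhost:9092 and an existing topic my-topic; the calls mirror the client code quoted in the comments below):

extern crate kafka;

use kafka::client::{FetchOffset, KafkaClient};

fn main() {
    let mut client = KafkaClient::new(vec!["localhost:9092".to_owned()]);
    // Load metadata for all topics known to the cluster.
    client.load_metadata_all().unwrap();
    // Fetch the latest offset of each partition of the topic.
    let offsets = client
        .fetch_offsets(&["my-topic".to_owned()], FetchOffset::Latest)
        .unwrap();
    println!("{:#?}", offsets);
}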

Bugs / Features / Contributing

There's still a lot of room for improvement on kafka-rust. Not everything works right at the moment, and testing coverage could be better. Use it in production at your own risk. Have a look at the issue tracker and feel free to contribute by reporting new problems or contributing to existing ones. Any constructive feedback is warmly welcome!

As usual with open source, don't hesitate to fork the repo and submit a pull request if you see something that should be changed. We'll be happy to see kafka-rust improve over time.

Integration tests

When working locally, the integration tests require Docker (1.10.0+) and docker-compose (1.6.0+). Run the tests via the included run-all-tests script in the tests directory; see the script itself for details on its usage.

Creating a topic

Note that unless otherwise explicitly stated in the documentation, this library will ignore requests for topics which it doesn't know about. In particular, it will not try to retrieve messages from non-existing/unknown topics. (This behavior is very likely to change in a future version of this library.)

Given a local Kafka server installation, you can create topics with the following command (where kafka-topics.sh is part of the Kafka distribution):

kafka-topics.sh --topic my-topic --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1

See also Kafka's quickstart guide for more information.

Comments
  • Enable ssl

    This work got started in #50 and #51. I've built what I believe to be the appropriate wrapper around the connections; now I'm just trying to figure out the best way to expose it. Would love to hear your feedback @spicavigo.

    It was mostly inspired by: https://github.com/hyperium/hyper/blob/master/src/net.rs#L655-L664

    cc @erickt

    opened by Hoverbear 30
  • utils::{TopicPartitionOffset, PartitionOffset} have Result fields

    Resolves #128

    • [x] Make KafkaClient.fetch_group_(topic_)?offsets return a Result<HashMap<String, Vec<PartitionOffset>>>
    • [x] Make PartitionOffset.offset an i64 rather than Result<i64>
    • [x] Change KafkaClient::produce_messages to return a new struct, ProduceConfirm, rather than TopicPartitionOffset, and remove TopicPartitionOffset
    0.6.x 
    opened by dead10ck 17
  • Project needs maintainer

    Unfortunately, I have not had time the last few years to continue maintenance of this project, nor has @xitep or @spicavigo. Please post here if you are interested in becoming a maintainer.

    opened by dead10ck 13
  • Fetching group topic offsets does not work

    I'm trying to use this lib to calculate the offset lag for a particular group ID. It seems able to fetch topic metadata through KafkaClient::fetch_offsets just fine, but when specifying a group with KafkaClient::fetch_group_topic_offsets, it returns an UnknownTopicOrPartition error in the resulting offset entry.

    fn main() {
        ...
        let kafka_config = config.kafka_config;
    
        let mut client = KafkaClient::new(kafka_config.broker_list.clone());
        client.load_metadata(&kafka_config.topics).unwrap();
    
        let offsets = client.fetch_offsets(&kafka_config.topics, FetchOffset::Latest)
            .unwrap();
        println!("{:#?}", offsets);
    
        for topic in kafka_config.topics {
            let group_offsets = client.fetch_group_topic_offsets(&kafka_config.group, &topic)
                .unwrap();
            println!("{:#?}", group_offsets);
        }
    }
    

    this prints out

    {
        "foo": [
            PartitionOffset {
                offset: Ok(
                    299905
                ),
                partition: 3
            },
            PartitionOffset {
                offset: Ok(
                    299905
                ),
                partition: 6
            },
        ...
    }
    [
        TopicPartitionOffset {
            offset: Err(
                Kafka(
                    UnknownTopicOrPartition
                )
            ),
            topic: "foo",
            partition: 3
        },
        TopicPartitionOffset {
            offset: Err(
                Kafka(
                    UnknownTopicOrPartition
                )
            ),
            topic: "foo",
            partition: 6
        },
        ...
    ]
    

    These are active Kafka (v0.9) topics that are known to be working with the official Java library and have committed offsets. It's also worth mentioning that when I create a Consumer and poll messages on the same group ID, it does not find a committed offset for the group, even though one exists; it falls back to the offset specified by the fallback option.

    bug 0.4.x 
    opened by dead10ck 13
  • error: multiple rlib candidates for `kafka` found

    I'm a beginner with Kafka and Rust. To use the Rust client as a consumer, I added the GitHub example to src/main.rs:

    extern crate kafka;
    use kafka::client::KafkaClient;
    fn main() {
        let mut client = KafkaClient::new(&vec!("localhost:9092".to_string()));
        client.load_metadata_all();
        // OR
        // client.load_metadata(&vec!("my-topic".to_string())); // Loads metadata for vector of topics
     }
    

    and added kafka = "*" to the Cargo.toml dependencies.

    Then, when I run $ cargo build, I get the following error. Can anyone help me solve this?

       Compiling kafka v0.1.6 (file:///Users/admin/myrustdemo/kafka-rust-master)
    src/main.rs:2:1: 2:20 error: multiple rlib candidates for `kafka` found
    src/main.rs:2 extern crate kafka;
                  ^~~~~~~~~~~~~~~~~~~
    src/main.rs:2:1: 2:20 note: candidate #1: /Users/admin/myrustdemo/kafka-rust-master/target/debug/libkafka.rlib
    src/main.rs:2 extern crate kafka;
                  ^~~~~~~~~~~~~~~~~~~
    src/main.rs:2:1: 2:20 note: candidate #2: /Users/admin/myrustdemo/kafka-rust-master/target/debug/deps/libkafka-73c7ae9da4235a7d.rlib
    src/main.rs:2 extern crate kafka;
                  ^~~~~~~~~~~~~~~~~~~
    src/main.rs:2:1: 2:20 error: can't find crate for `kafka`
    src/main.rs:2 extern crate kafka;
                  ^~~~~~~~~~~~~~~~~~~
    error: aborting due to 2 previous errors
    Could not compile `kafka`.
    
    opened by Lingling7 13
  • use `native-tls` instead of `openssl`

    to solve #149

    there are some limitations:

    • it's hard to impl Clone for Error
    • native-tls only supports loading a CA in DER format and a cert/key in PKCS12 format, sfackler/rust-native-tls#27
    • native-tls doesn't support disabling verification, sfackler/rust-native-tls#3
    • native-tls doesn't support set_cipher_list, sfackler/rust-native-tls#4
    opened by flier 12
  • producer: ensure distributing messages with the same key to the same partition

    The "key" - if available - of a message is typically used to derive a consistent partition which a message should be delivered to. Usually one computes a hash code of the key and maps that consistently on one of the partitions of the topic.

    Clients of the Producer API can already achieve this by supplying a custom Partitioner to the Producer. However, we shall make this the standard behaviour of the DefaultPartitioner.

    Unfortunately, the topic metadata held by the Producer (which is supplied by KafkaClient) delivers only "available" partitions, which makes it impossible to implement the "consistency" aspect of the requirement. Therefore, this will require an extension to KafkaClient as well.

    Regarding the hash algorithm, we don't need a cryptographic hash function. However, we need one that doesn't differ depending on the platform - this rules out farmhash, for example. (It seems the Java-based producer uses murmur2-32bit, but I don't see the need to use the same one, since the various Kafka clients don't seem to stick to one "default" hash function; e.g. the Java-based producer uses a different hash function than the Scala-based one.)
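
    A sketch of what such a platform-independent key hash could look like (FNV-1a is purely an illustrative choice here, not necessarily what DefaultPartitioner will end up using):

    // FNV-1a over the key bytes: simple and identical on every platform
    // (illustrative only).
    fn fnv1a(bytes: &[u8]) -> u64 {
        let mut hash: u64 = 0xcbf29ce484222325;
        for &b in bytes {
            hash ^= b as u64;
            hash = hash.wrapping_mul(0x100000001b3);
        }
        hash
    }

    // Map a message key consistently onto one of `num_partitions` partitions.
    fn partition_for_key(key: &[u8], num_partitions: u32) -> u32 {
        (fnv1a(key) % num_partitions as u64) as u32
    }

    fn main() {
        // The same key always lands on the same partition.
        for key in &["order-1", "order-2", "order-1"] {
            println!("{} -> partition {}", key, partition_for_key(key.as_bytes(), 6));
        }
    }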

    feature 0.3.x 
    opened by xitep 12
  • consumer: allow consuming multiple topics

    I'm trying to use a Rust consumer to read from multiple topics. This is the code I have now:

    extern crate kafka;
    use kafka::client::KafkaClient;
    use kafka::consumer::Consumer;
    use kafka::utils;
    fn main(){
        let mut client = KafkaClient::new(vec!("localhost:9092".to_owned()));
        let res = client.load_metadata_all();
        let topics = client.topic_partitions.keys().cloned().collect(); 
        let offsets = client.fetch_offsets(topics, -1);
        for topic in &topics {
        let mut con = Consumer::new(client, "test-consumer-group".to_owned(), "topic".to_owned()).partition(0);
        let mut messagedata = 0;
        for msg in con {
            println!("{}", str::from_utf8(&msg.message).unwrap().to_string());
        }
      }
    }
    
    

    below is the error:

        src/main.rs:201:19: 201:25 error: use of moved value: `topics` [E0382]
    src/main.rs:201     for topic in &topics {
                                      ^~~~~~
        note: in expansion of for loop expansion
        src/main.rs:201:5: 210:6 note: expansion site
        src/main.rs:167:40: 167:46 note: `topics` moved here because it has type `collections::vec::Vec<collections::string::String>`, which is non-copyable
        src/main.rs:167     let offsets = client.fetch_offsets(topics, -1);
                                                           ^~~~~~
        src/main.rs:203:37: 203:43 error: use of moved value: `client` [E0382]
        src/main.rs:203     let mut con = Consumer::new(client, "test-consumer-group".to_owned(), "topicname".to_owned()).partition(0);
                                                        ^~~~~~
        note: in expansion of for loop expansion
        src/main.rs:201:5: 210:6 note: expansion site
        note: `client` was previously moved here because it has type     `kafka::client::KafkaClient`, which is non-copyable
        error: aborting due to 2 previous errors
    

    To better explain my question, here is my partially working code for just one topic:

    let mut con = Consumer::new(client, "test-consumer-group".to_owned(), "testtopic".to_owned()).partition(0);
    
    for msg in con {
        println!("{}", str::from_utf8(&msg.message).unwrap().to_string());
    }
    

    I also tested the fetch_messages_multi function; it works for multiple topics, but the result I get (msgs) is TopicMessage, and I don't know how to extract the message from a TopicMessage.

    let msgs = client.fetch_messages_multi(vec!(utils::TopicPartitionOffset{
                                                topic: "topic1".to_string(),
                                                partition: 0,
                                                offset: 0 //from the begining
                                                },
                                            utils::TopicPartitionOffset{
                                                topic: "topic2".to_string(),
                                                partition: 0,
                                                offset: 0
                                            },
                                            utils::TopicPartitionOffset{
                                                topic: "topic3".to_string(),
                                                partition: 0,
                                                offset: 0
                                            }));
    for msg in msgs{
        println!("{}", msg);
    }
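
    For reference, a sketch of how multiple topics can be consumed with the builder-based Consumer API (topic, group, and broker names are placeholders; this relies on the multi-topic support that landed in 0.4, see the release notes below):

    extern crate kafka;

    use kafka::consumer::{Consumer, FetchOffset};

    fn main() {
        let mut consumer = Consumer::from_hosts(vec!["localhost:9092".to_owned()])
            // with_topic can be invoked once per topic to subscribe to several.
            .with_topic("topic1".to_owned())
            .with_topic("topic2".to_owned())
            .with_group("test-consumer-group".to_owned())
            .with_fallback_offset(FetchOffset::Earliest)
            .create()
            .unwrap();
        for ms in consumer.poll().unwrap().iter() {
            // Each message set reports the topic and partition it came from.
            println!("{}:{} -> {} messages", ms.topic(), ms.partition(), ms.messages().len());
        }
    }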
    
    feature 0.4.x 
    opened by Lingling7 10
  • utils::{TopicPartitionOffset, PartitionOffset} have Result fields

    Currently, the utils::{TopicPartitionOffset, PartitionOffset} structs' offset fields are Results. In my opinion, it's kind of strange to have a struct with a Result in the middle of it for anything but maybe custom Error types. It can make error handling on the user side really inconvenient.

    Additionally, it's not entirely clear to me why TopicPartitionOffset is needed at all; it seems to exist only to flatten responses, which leads to lots of inefficient cloning of data it already had. I think in an instance like the linked one, it would be more appropriate to return a map from topics to PartitionOffsets; that way, only a move of one string is necessary, rather than a clone for every single offset response.

    What do you think? I tried to start by changing the Results to just their underlying type, but I can see it's a non-trivial refactor, so I thought I'd get your input first.
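
    For illustration, the proposed shape could look roughly like this (a sketch of the idea only; names and field layout are illustrative):

    use std::collections::HashMap;

    // Offset of a single partition, with a plain i64 instead of a Result field.
    pub struct PartitionOffset {
        pub partition: i32,
        pub offset: i64,
    }

    // Offsets grouped per topic; errors would surface once at the call level,
    // e.g. as Result<HashMap<String, Vec<PartitionOffset>>>, rather than
    // inside each struct.
    pub type GroupOffsets = HashMap<String, Vec<PartitionOffset>>;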

    enhancement 0.6.x 
    opened by dead10ck 9
  • Building on a system without installed Snappy

    Hey there,

    I'm working on a Heroku application that will be utilizing Kafka and am trying to build this library. As part of this, I have a script which builds libsnappy.a by running make install for Snappy into $CACHE_DIR/snappy-build/lib; here is the script:

    #!/usr/bin/env bash
    # bin/compile <build-dir> <cache-dir> <env-dir>
    
    set -e
    set -o pipefail
    
    # Build related variables.
    BUILD_DIR=$1
    CACHE_DIR=$2
    ENV_DIR=$3
    
    if [[ ! -d "$CACHE_DIR/snappy" ]]; then
        mkdir -p $CACHE_DIR
        cd $CACHE_DIR
        git clone https://github.com/google/snappy
        cd snappy
        ./autogen.sh
        ./configure --prefix="$(pwd)/../snappy-build"
        make
        make install
    fi
    
    export LD_LIBRARY_PATH="$CACHE_DIR/snappy-build/lib"
    export LD_RUN_PATH="$CACHE_DIR/snappy-build/lib"
    

    However, whenever I try to push my application, the kafka-rust library fails to link correctly. See:

    src/main.rs:5:5: 5:31 warning: unused import, #[warn(unused_imports)] on by default
    src/main.rs:5 use kafka::client::KafkaClient;
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~
    error: linking with `cc` failed: exit code: 1
    note: "cc" "-Wl,--as-needed" "-m64" "-L" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib" "/source/target/debug/kafka_rust_test.0.o" "-o" "/source/target/debug/kafka_rust_test" "-Wl,--gc-sections" "-pie" "-nodefaultlibs" "-L" "/source/target/debug" "-L" "/source/target/debug/deps" "-L" "/source/target/debug/build/miniz-sys-d03126dbc9ee0074/out" "-L" "/source/target/debug/build/openssl-bade188098f75a08/out" "-L" "/usr/lib" "-L" "/source/target/debug/build/openssl-sys-extras-4daf08572eefce4c/out" "-L" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-Wl,-Bstatic" "-Wl,-Bdynamic" "/source/target/debug/deps/libkafka-9ca67eb32e4e1893.rlib" "/source/target/debug/deps/libiron-4fdc411e735a87b7.rlib" "/source/target/debug/deps/libmodifier-550b2f0e350e2963.rlib" "/source/target/debug/deps/liblibc-dd3420cb049117bb.rlib" "/source/target/debug/deps/libhyper-6df9e9da10e4a36f.rlib" "/source/target/debug/deps/libunicase-4ad2965620fe21a9.rlib" "/source/target/debug/deps/libsolicit-8632432b3a4330d6.rlib" "/source/target/debug/deps/libhttparse-5c82294627258d33.rlib" "/source/target/debug/deps/libhpack-8f91a695370f3d75.rlib" "/source/target/debug/deps/libplugin-99f6a9d43520a61a.rlib" "/source/target/debug/deps/liberror-d27f5056e37dd662.rlib" "/source/target/debug/deps/liblanguage_tags-9df4f269b9ee12d4.rlib" "/source/target/debug/deps/libnum_cpus-9a6b3f359403ec12.rlib" "/source/target/debug/deps/libbyteorder-3945c3ad3e0e1d9c.rlib" "/source/target/debug/deps/libtraitobject-4ea485452a3a4a0b.rlib" "/source/target/debug/deps/libtypemap-487ec1564575dd9b.rlib" "/source/target/debug/deps/libconduit_mime_types-8c1fe30d92f8233a.rlib" "/source/target/debug/deps/libmime-f5540d134e188f4e.rlib" "/source/target/debug/deps/liblog-87d547eff707fc8e.rlib" "/source/target/debug/deps/libcookie-9ec7d33888fc3f77.rlib" "/source/target/debug/deps/liburl-b7bb31aacec501cd.rlib" "/source/target/debug/deps/libuuid-fed17b74aa7673e2.rlib" "/source/target/debug/deps/libmatches-737aa40e66529b02.rlib" "/source/target/debug/deps/libopenssl-bade188098f75a08.rlib" "/source/target/debug/deps/liblazy_static-007034d2ad8108ce.rlib" "/source/target/debug/deps/libbitflags-646076c1f4684754.rlib" "/source/target/debug/deps/libopenssl_sys_extras-4daf08572eefce4c.rlib" "/source/target/debug/deps/libopenssl_sys-5ec15f2e42328238.rlib" "/source/target/debug/deps/libflate2-1e8d03096e238ad4.rlib" "/source/target/debug/deps/libminiz_sys-d03126dbc9ee0074.rlib" "/source/target/debug/deps/libserde-a8734c231f4c36ee.rlib" "/source/target/debug/deps/libnum-02177b937f857300.rlib" "/source/target/debug/deps/librand-340832a8942cb900.rlib" "/source/target/debug/deps/librustc_serialize-3c33cb2a40992011.rlib" "/source/target/debug/deps/libtypeable-7ddee84661471c9b.rlib" "/source/target/debug/deps/libtime-22c21fe32894ddad.rlib" "/source/target/debug/deps/liblibc-adb8b8e7aaa2f93f.rlib" "/source/target/debug/deps/libunsafe_any-081a220f4ebf6660.rlib" "/source/target/debug/deps/libtraitobject-a729e3ae66a1ef57.rlib" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-17a8ccbd.rlib" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcollections-17a8ccbd.rlib" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_unicode-17a8ccbd.rlib" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/librand-17a8ccbd.rlib" 
"/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc-17a8ccbd.rlib" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc_jemalloc-17a8ccbd.rlib" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/liblibc-17a8ccbd.rlib" "/root/.multirust/toolchains/nightly/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore-17a8ccbd.rlib" "-l" "snappy" "-l" "c" "-l" "m" "-l" "ssl" "-l" "crypto" "-l" "dl" "-l" "pthread" "-l" "gcc_s" "-l" "pthread" "-l" "c" "-l" "m" "-l" "rt" "-fuse-ld=gold" "-l" "compiler-rt"
    note: /usr/sbin/ld.gold: error: cannot find -lsnappy
    

    Could you perhaps offer some guidance? I've read through the build.rs script in this repo and have been trying, unsuccessfully, to convince it to pick up my Snappy build.

    bug 
    opened by Hoverbear 9
  • more than one consumer in the same group receive the message?

    I create a topic with 1 partition, and use the following code to create some consumers:

    Consumer::from_hosts(brokers)
        .with_topic("test".to_owned())
        .with_group("group-recognizer".to_owned())
        .with_fallback_offset(FetchOffset::Latest)
        .create()
        .unwrap()
    

    then use the following code to listen for the topic's messages:

    for ms in consumer.poll().unwrap().iter() {
        for m in ms.messages() {
            println!("{:?}", m);
        }
        consumer.consume_messageset(ms);
    }
    consumer.commit_consumed().unwrap();
    

    Also, I notice that no output like the following appears when those consumers start:

    [2017-06-21 00:06:55,868] INFO [GroupCoordinator 0]: Assignment received from leader for group group-recognizer for generation 4 (kafka.coordinator.GroupCoordinator)
    

    which will be generated when using kafka-console-consumer.sh with the same group.id in properties file.

    Edit: When I add .with_offset_storage(GroupOffsetStorage::Kafka) to the Consumer builder, the program panics on receiving the second message:

    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Kafka(UnknownMemberId)', src/libcore/result.rs:860
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    
    opened by kamyuentse 8
  • Consumers of the same group consuming same partitions and getting same messages

    I am using the kafka library and have multiple replicas of my consumers to ensure HA. However, I have noticed that even though I specify the consumer group and the client id, the replicas are getting the same messages from Kafka.

    After logging the consumer partitions, I noticed that they are the same.

    Therefore, I just want to confirm whether it is possible to have multiple consumers for the same topic consuming messages from different partitions, without manually specifying which partition should be consumed. My goal is to avoid the same messages being processed multiple times by different consumers.

    opened by juliorenner 1
  • Protocol Rework

    After reading through the protocol code, I thought it needed a bit of a rework - there is room for improvement (removing unsafe code, more idiomatic Rust, and better performance).

    I played around a bunch with alternative designs for better performance (notably for decoding), but it may be too complicated to adopt one of those approaches right now. Essentially, there should be a custom Read that allows retrieving the bytes being read, which would minimize allocations and copies (see bincode).

    For now I will try to implement something that works and fully implements the protocol using zero unsafe code and idiomatic Rust; I will look into the custom Read later on.

    opened by midnightexigent 0
  • Added Kafka 3.1.0 to integration tests and changed misc error handling

    Integration Tests

    • Added Kafka 3.1.0
    • Updated readme to reflect versions tested
    • Simplified Dockerfile
    • Added set -e to potentially catch other errors

    Dependencies

    • Switched logs to use 'tracing' crate instead
    • Removed 'log' and 'env_logger' dependencies
    • Added 'tracing-subscriber' dependency to replace 'env_logger'

    Core

    • Made offset storage optional
    • Error during commit if offset storage or group id is empty

    Examples (esp. console-consumer)

    • Added --follow option
    • Changed '--no-commit' to '--commit'
    • Default to 'GroupOffsetStorage=None'
    • etc.
    opened by r3h0 2
Releases (v0.6.1)
  • v0.6.1(May 26, 2017)

  • v0.6.0(Apr 28, 2017)

    • Update to latest rust-openssl (#122; many thanks to @flier)
    • Replace native snappy dependency with rust-snappy (#116; again many thanks to @flier)
    • Performance improvements by avoiding needless cloning during fetching offsets and API improvements around fetching group offsets (#128; many many thanks to @dead10ck)
    • Replace SipHasher with XxHash (#145)
    • Added Consumer#subscriptions to inspect the consumed topic-partitions (#137)
    • Started maintaining integration tests (#143; kudos to @dead10ck)
    • Allow borrowing KafkaClient from Consumer/Producer (#136; contributed by @thijsc)
    • Don't send client-id by default; but allow specifying one (#140; contributed by @Morsicus)
  • v0.5.0(Aug 7, 2016)

  • v0.4.2(Aug 4, 2016)

  • v0.4.1(Jul 14, 2016)

  • v0.4.0(Jul 12, 2016)

    • Kafka group offset storage support (#95)
    • Allow consumer to read topics/partitions without a group (breaking api change) (#105)
    • Support for reading multiple topics through consumer (breaking api change) (#31)
    • Type safe required-acks api for sending messages (breaking api change) (#97)
    • Consumer#fallback_offset defaults to "latest" now (#106)
    • Update dependency on crc 1.3 (#98)
    • Properly evaluate compression type for fetched messages (#96)
  • v0.3.3(Jun 16, 2016)

  • v0.3.2(Jun 5, 2016)

    Introduce features to exclude third-party dependencies:

    • "snappy" to exclude dependency on the snappy crate
    • "gzip" to exclude dependency on the flate2 crate
    • "security" to exclude dependency on the openssl crate
  • v0.3.1(May 19, 2016)

  • v0.3.0(Mar 31, 2016)

    This release only slightly changes the public API of kafka-rust compared to 0.2.0.

    • Producer will now dispatch messages with the same key to the same partition.
    • One public unsafe method has been dropped from the fetch response structure.
    • KafkaClient::fetch_messages guarantees to deliver messages with message.offset >= partition.requested-offset (this guarantee was previously available only through Consumer.)
    • KafkaClient::fetch_messages and Consumer will now validate the CRC checksum of fetched messages by default. Since this involves some non-trivial overhead, CRC validation can be turned off.
  • v0.2.0(Feb 16, 2016)

    This release is generally incompatible with 0.1 due to the renaming of types and the movement of functionality out of KafkaClient into the Consumer and (the new) Producer APIs. If you depend on kafka-rust and cannot upgrade right now, be sure to declare an explicit dependency on the 0.1 version.

    Some highlights of this release are:

    • Make the KafkaClient API accept references where possible (to avoid the need for clones on the caller side)
    • Drop kafka::utils from public view; all of the types have been moved to kafka::client and some of them renamed to match their purpose
    • Avoid cloning when fetching messages (a huge performance win when fetching uncompressed messages)
    • Correctly consume compressed messages
    • Expose messages basically as key/value pairs (in both the fetching and sending APIs)
    • Allow the client side to control offset commits in the Consumer API
    • Avoid having Consumer clone delivered messages; Consumer no longer implements Iterator
    • Sending messages through KafkaClient no longer automatically distributes them to different partitions
    • Introduce the Producer API for sending messages to Kafka
    • Separate out the snappy code into the snappy crate
    • Make KafkaClient implement Send (to allow moving it to another thread after initialization)
  • v0.1.8(Nov 12, 2015)

  • v0.1.7(Nov 12, 2015)

    • Snappy/Gzip compression when sending messages - @jedisct1
    • Handling partial trailing messages - @frew
    • Allow sending multiple messages to the same topic-partition in one call - @jedisct1
    • Internal re-factorings and small performance improvements - @jedisct1 @xitep
    • Allow builds against custom libsnappy installations - @winding-lines
    • Fallback offset for consumer - @xitep
  • v0.1.6(May 25, 2015)

  • v0.1.5(May 20, 2015)

  • v0.1.4(May 19, 2015)

    Fetching offsets, fetching messages, and sending messages had previously been implemented for single arguments only. This release provides methods for multiple arguments.

    For example, fetching messages can take either:

    1. topic, partition, offset, OR
    2. a vector of TopicPartitionOffset (a wrapper around the above three)

    Similar changes apply to the other two methods.

    I have added methods for offset management, but these are yet to be thoroughly tested.

  • v0.1.3(May 11, 2015)

  • v0.1.1(May 9, 2015)

  • v0.1.0(May 7, 2015)
