A Rust client for the ElasticSearch REST API

Overview

rs-es

Introduction

An ElasticSearch client for Rust via the REST API, targeting ElasticSearch 2.0 and higher.

Other clients

For later versions of ElasticSearch you probably want the official client.

Documentation

Full documentation for rs-es.

Building and installation

Version 0.12.0 and higher have been tested with the prevailing "stable", "beta" and "nightly" versions of rustc at the time of their release. It should also work correctly with earlier versions, however some dependencies may use or require language features that are only available in recent versions of rustc.

crates.io

Available from crates.io.

ElasticSearch compatibility

The default version of ElasticSearch supported is 2.0. Higher versions will also work as long as the particular part of the ES API used is compatible with the version 2 spec.

Newer versions of ElasticSearch do have some incompatibilities in certain areas, and those areas are therefore not supported by this library.

However, starting with version 0.12.1 there is experimental support for ES 5 via the es5 feature flag. The intention is that this support will become more complete over time and will become the new baseline supported version.

Design goals

There are two primary goals: 1) to be a full implementation of the ElasticSearch REST API, and 2) to be idiomatic both with ElasticSearch and Rust conventions.

The second goal is more difficult to achieve than the first, as the two sometimes conflict. A small example of this is the word type: it refers to the type of an ElasticSearch document, but it is also a reserved word for defining types in Rust. This means we cannot name a field type, for instance, so in this library the document type is always referred to as doc_type instead.

Usage guide

The client

The Client wraps a single HTTP connection to a specified ElasticSearch host/port.

(At present there is no connection pooling; each client has one connection. If you need multiple connections you will need multiple clients. This may change in the future.)

use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");

Operations

The Client provides various operations, which are analogous to the various ElasticSearch APIs.

In each case the Client has a function which returns a builder-pattern object on which additional options can be set. The function itself takes the mandatory parameters; everything else is set on the builder (e.g. operations that require an index to be specified will have index as a parameter on the function itself).

An example of an optional parameter is routing, which can be set on operations that support it with:

op.with_routing("user123")

See the ElasticSearch guide for the full set of options and what they mean.
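The builder pattern described above can be sketched in plain, std-only Rust. This DeleteOperation (its fields, methods, and URL layout) is an illustrative stand-in, not the real rs-es type, which holds a reference to the Client and issues actual HTTP requests:

```rust
// Hypothetical sketch of the builder pattern used by operations; this
// `DeleteOperation` is illustrative only, not the real rs-es type.
#[derive(Debug)]
struct DeleteOperation {
    index: String,           // mandatory: set by the constructor
    id: String,              // mandatory: set by the constructor
    routing: Option<String>, // optional: set via a `with_*` method
}

impl DeleteOperation {
    fn new(index: &str, id: &str) -> Self {
        DeleteOperation {
            index: index.to_owned(),
            id: id.to_owned(),
            routing: None,
        }
    }

    // Optional parameters are chainable.
    fn with_routing(mut self, routing: &str) -> Self {
        self.routing = Some(routing.to_owned());
        self
    }

    // `send` would issue the HTTP request; here we just render the URL.
    fn url(&self) -> String {
        match self.routing {
            Some(ref r) => format!("/{}/{}?routing={}", self.index, self.id, r),
            None => format!("/{}/{}", self.index, self.id),
        }
    }
}

fn main() {
    let op = DeleteOperation::new("index_name", "42").with_routing("user123");
    println!("{}", op.url()); // prints "/index_name/42?routing=user123"
}
```

Keeping mandatory parameters on the constructor and optional ones on chainable with_* methods is what lets each operation expose ElasticSearch's many optional URI parameters without long argument lists.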

index

An implementation of the Index API.

let index_op = client.index("index_name", "type_name");

This returns an IndexOperation on which additional options can be set. For example, to set an ID and a TTL:

index_op.with_id("ID_VALUE").with_ttl("100d");

The document to be indexed has to implement the Serialize trait from the serde library. This can be achieved by either implementing or deriving that on a custom type, or by manually creating a Value object.

Calling send submits the index operation and returns an IndexResult:

index_op.with_doc(&document).send();

get

An implementation of the Get API.

Index and ID are mandatory, but type is optional. Some examples:

// Finds a document of any type with the given ID
let result_1 = client.get("index_name", "ID_VALUE").send();

// Finds a document of a specific type with the given ID
let result_2 = client.get("index_name", "ID_VALUE").with_doc_type("type_name").send();

delete

An implementation of the Delete API.

Index, type and ID are mandatory.

let result = client.delete("index_name", "type_name", "ID_VALUE").send();

refresh

Sends a refresh request.

use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
// To everything
let result = client.refresh().send();

// To specific indexes
let result = client.refresh().with_indexes(&["index_name", "other_index_name"]).send();

search_uri

An implementation of the Search API using query strings.

Example:

use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.search_uri()
                   .with_indexes(&["index_name"])
                   .with_query("field:value")
                   .send::<String>();

search_query

An implementation of the Search API using the Query DSL.

use rs_es::Client;
use rs_es::query::Query;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.search_query()
                   .with_indexes(&["index_name"])
                   .with_query(&Query::build_match("field", "value").build())
                   .send::<String>();

A search query also supports scan and scroll, sorting, and aggregations.

count_uri

An implementation of the Count API using query strings.

Example:

use rs_es::Client;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.count_uri()
                   .with_indexes(&["index_name"])
                   .with_query("field:value")
                   .send();

count_query

An implementation of the Count API using the Query DSL.

use rs_es::Client;
use rs_es::query::Query;

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.count_query()
                   .with_indexes(&["index_name"])
                   .with_query(&Query::build_match("field", "value").build())
                   .send();

bulk

An implementation of the Bulk API. This is the preferred way of indexing (or deleting, when Delete-by-Query is removed) many documents.

use rs_es::operations::bulk::Action;

let result = client.bulk(&vec![Action::index(document1),
                               Action::index(document2).with_id("id")]);

In this case the document can be anything that implements Serialize.

Sorting

Sorting is supported on all forms of search (by query or by URI), and related operations (e.g. scan and scroll).

use rs_es::Client;
use rs_es::query::Query;
use rs_es::operations::search::{Order, Sort, SortBy, SortField};

let mut client = Client::init("http://localhost:9200").expect("connection failed");
let result = client.search_query()
                   .with_query(&Query::build_match_all().build())
                   .with_sort(&Sort::new(vec![
                       SortBy::Field(SortField::new("fieldname", Some(Order::Desc)))
                   ]))
                   .send::<String>();

This is quite unwieldy for simple cases, although it does support the more exotic combinations that ElasticSearch allows; so there are also a number of convenience functions for the simpler cases, e.g. sorting by a field in ascending order:

// Omitted the rest of the query
.with_sort(&Sort::field("fieldname"))

Results

Each of the operations defined above returns a result: a struct that maps directly to the JSON that ElasticSearch returns.

One of the most common return types is that of the search operations; this too mirrors the JSON that ElasticSearch returns. The top level contains two fields: shards, with counts of successful/failed operations per shard, and hits, containing the search results. The results take the form of another struct with two fields: total, the total number of matching results; and hits, a vector of the individual results.

The individual results contain meta-data for each hit (such as the score) as well as the source document (unless the query set options that disable or alter this).

The type of the source document can be anything that implements Deserialize. ElasticSearch searches may return many different types of document, and by default ElasticSearch doesn't enforce any schema; together this means the structure of a returned document may need to be validated before being deserialised. In such cases a search result can return a Value, from which data can be extracted and/or converted to other structures.
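As a simplified illustration of the shape described above, the following std-only sketch mirrors the nesting; the names and fields are approximations, and the real rs-es structs carry additional per-hit fields (index, type, ID, and so on):

```rust
// Simplified, hypothetical mirror of the search-result shape described
// above; not the exact rs-es definitions.
#[derive(Debug)]
struct SearchResult<T> {
    shards: Shards,      // successful/failed counts per shard
    hits: SearchHits<T>, // the search results
}

#[derive(Debug)]
struct Shards {
    total: u32,
    successful: u32,
    failed: u32,
}

#[derive(Debug)]
struct SearchHits<T> {
    total: u64,              // total number of matching results
    hits: Vec<SearchHit<T>>, // the individual results
}

#[derive(Debug)]
struct SearchHit<T> {
    score: Option<f64>, // per-hit meta-data
    source: Option<T>,  // the source document, if returned
}

fn main() {
    // The generic parameter is whatever type the source deserialises into.
    let result: SearchResult<String> = SearchResult {
        shards: Shards { total: 5, successful: 5, failed: 0 },
        hits: SearchHits {
            total: 1,
            hits: vec![SearchHit { score: Some(1.0), source: Some("doc".to_owned()) }],
        },
    };
    println!("{} of {} hits returned", result.hits.hits.len(), result.hits.total);
}
```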

The Query DSL

ElasticSearch offers a rich DSL for searches. It is JSON based, and therefore very easy to use and compose from a dynamic language (e.g. Ruby); in Rust, a statically-typed language, things are different. The rs_es::query module defines a set of builder objects which can be similarly composed to the same ends.

For example:

use rs_es::query::Query;

let query = Query::build_bool()
    .with_must(vec![Query::build_term("field_a",
                                      "value").build(),
                    Query::build_range("field_b")
                          .with_gte(5)
                          .with_lt(10)
                          .build()])
    .build();

The resulting Query value can be used in the various search/query functions exposed by the client.

The implementation makes much use of conversion traits which are used to keep a lid on the verbosity of using such a builder pattern.
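To illustrate what conversion traits buy here, consider this hypothetical, self-contained sketch (JsonValue and TermQuery are stand-ins, not rs-es types): by accepting any V: Into<JsonValue>, a builder lets callers pass &str or i64 values directly without wrapping them at the call site.

```rust
// Hypothetical sketch of how conversion traits reduce builder verbosity;
// `JsonValue` and `TermQuery` are stand-ins, not actual rs-es types.
#[derive(Debug, PartialEq)]
enum JsonValue {
    Str(String),
    Int(i64),
}

impl From<&str> for JsonValue {
    fn from(s: &str) -> Self {
        JsonValue::Str(s.to_owned())
    }
}

impl From<i64> for JsonValue {
    fn from(i: i64) -> Self {
        JsonValue::Int(i)
    }
}

#[derive(Debug)]
struct TermQuery {
    field: String,
    value: JsonValue,
}

impl TermQuery {
    // Accepting `Into<JsonValue>` means callers never wrap values manually.
    fn new<V: Into<JsonValue>>(field: &str, value: V) -> Self {
        TermQuery { field: field.to_owned(), value: value.into() }
    }
}

fn main() {
    let a = TermQuery::new("field_a", "value"); // &str converts automatically
    let b = TermQuery::new("field_b", 5i64);    // i64 converts automatically
    println!("{:?} / {:?}", a.value, b.value);
}
```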

Scan and scroll

When working with large result sets from an ElasticSearch query, the most efficient approach is scan and scroll. It is preferred to simple pagination (setting the from option on a search) because it keeps resources open server-side, allowing the next page to carry on literally from where the previous one finished, rather than executing additional queries. The downside is that this requires more memory and open file-handles on the server, which could go wrong if there were many unfinished scrolls; for this reason, ElasticSearch recommends a short time-out for such operations, after which it will close all resources whether the client has finished or not. The client is responsible for fetching the next page within the time-out.

To use scan and scroll, begin with a search query request, but instead of calling send call scan:

let scan = client.search_query()
                 .with_indexes(&["index_name"])
                 .with_query(&Query::build_match("field", "value").build())
                 .scan(Duration::minutes(1))
                 .unwrap();

(Disclaimer: any use of unwrap in this or other examples is for brevity; obviously real code should handle errors in accordance with the needs of the application.)

Then scroll can be called multiple times to fetch each page. Finally, close will tell ElasticSearch the scan has finished so it can close any open resources.

let first_page = scan.scroll(&mut client);
// omitted - calls of subsequent pages
scan.close(&mut client).unwrap();

The result of the call to scan does not hold a reference to the client, hence the need to pass one in to subsequent calls. The advantage of this is that the same client can be used for actions based on each scroll.

Scan and scroll with an iterator

An iterator that scrolls through a scan is also supported.

let scan_iter = scan.iter(&mut client);

The iterator holds a mutable reference to the client, so the same client cannot be used concurrently. However, the iterator automatically calls close when it is dropped; this means the consumer of such an iterator can use adapters like take or take_while without having to decide when to call close.

The type of each value returned from the iterator is Result<SearchHitsHitsResult, EsError>. If an error is returned then it must be assumed the iterator is closed. The type SearchHitsHitsResult is the same as returned in a normal search (the verbose name is intended to mirror the structure of JSON returned by ElasticSearch).
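The close-on-drop behaviour can be sketched in std-only Rust; ScanIter below is illustrative, not the real rs-es iterator (which holds a &mut Client and sends a real close request):

```rust
// Illustrative sketch of close-on-drop; `ScanIter` is a stand-in for the
// real rs-es scroll iterator.
struct ScanIter {
    pages_left: u32,
}

impl Iterator for ScanIter {
    type Item = u32;

    // Each `next` stands in for fetching one page via `scroll`.
    fn next(&mut self) -> Option<u32> {
        if self.pages_left == 0 {
            return None;
        }
        self.pages_left -= 1;
        Some(self.pages_left)
    }
}

impl Drop for ScanIter {
    // Runs even when the iterator is abandoned early (e.g. via `take`),
    // which is where rs-es would call `close` on the scan.
    fn drop(&mut self) {
        println!("scan closed");
    }
}

fn main() {
    let iter = ScanIter { pages_left: 10 };
    // `take(3)` abandons the iterator early; `drop` still closes the scan.
    let first_three: Vec<u32> = iter.take(3).collect();
    println!("{:?}", first_three); // prints "[9, 8, 7]"
}
```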

Aggregations

There is also experimental support for aggregations.

client.search_query().with_indexes(&[index_name]).with_aggs(&aggs).send();

Where aggs is an rs_es::operations::search::aggregations::Aggregations. For convenience, conversion traits are implemented for common patterns: specifically the tuple (&str, Aggregation) for a single aggregation, and Vec<(&str, Aggregation)> for multiple aggregations.

Bucket aggregations (i.e. those that define a bucket that can contain sub-aggregations) can also be specified as a tuple (Aggregation, Aggregations).

use rs_es::operations::search::aggregations::Aggregations;
use rs_es::operations::search::aggregations::bucket::{Order, OrderKey, Terms};
use rs_es::operations::search::aggregations::metrics::Min;

let aggs = Aggregations::from(("str",
                               (Terms::field("str_field").with_order(Order::asc(OrderKey::Term)),
                                Aggregations::from(("int",
                                                    Min::field("int_field"))))));

When used within a search_query operation, the above would generate the following JSON fragment within the search request:

"str": {
    "terms": {
        "field": "str_field",
        "order": {"_term": "asc"}
    },
    "aggs": {
        "int": {
            "min": {
                "field": "int_field"
            }
        }
    }
}

The majority of aggregations are currently supported, but not all. See the documentation of the aggregations package for details.

For example, to get a reference to the result of the Terms aggregation called str (see above):

let terms_result = result.aggs_ref()
    .unwrap()
    .get("str")
    .unwrap()
    .as_terms()
    .unwrap();

EXPERIMENTAL: the structure of results may change as it currently feels quite cumbersome.

Unimplemented features

The ElasticSearch API is made up of a large number of smaller APIs, the vast majority of which are not yet implemented, although the most frequently used ones (searching, indexing, etc.) are.

Some non-exhaustive, specific TODOs:

  1. Add a CONTRIBUTING.md
  2. Handling API calls that don't deal with JSON objects.
  3. Documentation.
  4. Potentially: Concrete (de)serialization for aggregations and aggregation results
  5. Metric aggregations can have an empty body (check: all or some of them?) when used as a sub-aggregation underneath certain other aggregations.
  6. Performance (ensure use of persistent HTTP connections, etc.).
  7. All URI options are just String (or things that implement ToString); sometimes the values will be arrays that should be coerced into various formats.
  8. Check type of "timeout" option on Search...

Licence

   Copyright 2015-2017 Ben Ashford

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
Comments
  • Initial stab at converting to use Serde 1.0

    This is an initial attempt to convert the library to use Serde 1.0 (ref: #108). It's the first time I've done this particular conversion, so there are a couple of things that aren't yet finished. In particular, there are some uses of #[serde(bound(deserialize = ""))] which I'm not happy about; that particular solution comes from https://github.com/serde-rs/serde/issues/891.

    Any advice appreciated - or, if other folks want to build off this work, happy to have anyone use this as a starting point.

    opened by andrew-d 8
  • Can't compile on the latest nightly

    rustc 1.11.0-nightly (1ab87b65a 2016-07-02)

       Compiling rs-es v0.4.4
    /Users/giovanni/.cargo/registry/src/github.com-1ecc6299db9ec823/rs-es-0.4.4/src/operations/search/aggregations/bucket.rs:106:17: 106:26 error: overflow evaluating the requirement `query::Query: serde::Serialize` [E0275]
    /Users/giovanni/.cargo/registry/src/github.com-1ecc6299db9ec823/rs-es-0.4.4/src/operations/search/aggregations/bucket.rs:106 #[derive(Debug, Serialize)]
                                                                                                                                                 ^~~~~~~~~
    /Users/giovanni/.cargo/registry/src/github.com-1ecc6299db9ec823/rs-es-0.4.4/src/operations/search/aggregations/bucket.rs:106:17: 106:26 help: run `rustc --explain E0275` to see a detailed explanation
    /Users/giovanni/.cargo/registry/src/github.com-1ecc6299db9ec823/rs-es-0.4.4/src/operations/search/aggregations/bucket.rs:106:17: 106:26 note: required by `serde::Serialize`
    error: aborting due to previous error
    error: Could not compile `rs-es`.
    

    P.S.: New versions of serde and its dependencies have been released.

    opened by RoxasShadow 8
  • Refreshing index not working

    Hi -- in order to have my tests run I regularly call refresh on the client. The hope is that doing so will commit any changes I've made that I'm relying on -- for instance a document being created or all documents being deleted. This testing approach has worked when I've used ElasticSearch in other languages -- namely Ruby and JavaScript. However I'm getting intermittent failures in my tests where the index should be refreshed and up-to-date with the data I expect it to have. By way of debugging I've put a sleep delay of 1500 ms in there after the call to refresh() and before the assertion but that hasn't stopped the intermittent error that occurs there. So I thought I should check here to be sure that refreshing the index is supposed to work in the manner I expect it to? Thanks for any and all advice, Doug.

    opened by biot023 6
  • Upgrade Hyper to 0.11

    Hyper 0.11 introduces asynchronous processing with Tokio and Futures: https://hyper.rs/guides/client/basic/

    We should consider how to modify rs-es to use it. In principle we should adopt Futures as well, because ElasticSearch operations can involve non-trivial waiting time; this would allow application authors to use ElasticSearch without blocking threads, etc.

    opened by benashford 6
  • [WIP] move from hyper to reqwest

    As per issue #110, I have decided to migrate rs-es from hyper to reqwest. Initially I started porting to the latest hyper version, but quickly found that we would just end up copying oneshot completion code from reqwest, so I decided to take the abstraction one level higher. This migration was surprisingly easy due to the structure of rs-es. The benefits of reqwest, apart from maintainability, are the ability to build custom clients and pass them to rs-es, similar to the elastic rust library.

    I have left this as WIP as I have not updated the documentation or thoroughly tested it.

    opened by alexkornitzer 5
  • Hyper v1.0

    This PR upgrades hyper to the latest version, which introduces asynchronous I/O thanks to tokio. Since they removed internal TLS/SSL support, we'll get it from hyper-openssl.

    Maybe we can give precedence to #97 as this PR is based on it.

    opened by RoxasShadow 5
  • Manage URLs with rust-url

    In this way we can give all the responsibility to rust-url and allow HTTPS URLs.

    Calling as_str() on a Url at every request should not be a problem, as they store the slice upfront, according to the docs.

    This would be a breaking change obviously.

    opened by RoxasShadow 5
  • Implement settings

    This PR implements the Settings struct allowing users to configure their indexes, first of all in order to define their custom analyzers.

    Unfortunately this is a breaking change, as I preferred to follow the pattern used in other modules of this project in order to make the usage of MappingOperation more flexible.

    opened by RoxasShadow 5
  • Re-enable Nightly

    The Nightly Travis build is currently disabled due to an issue with an upstream dependency (Serde, in particular Quasi) being built on Nightly.

    We should upgrade the dependency as soon as a fix is available, and re-enable the nightly build.

    opened by benashford 5
  • Could not compile with serde_macros dependency

    Please take a look at https://github.com/serde-rs/quasi/issues/35#issuecomment-202404870. I have exactly the same problem, but couldn't solve it by using:

    [dependencies.rs-es]
    version = "0.3.2"
    features = ["nightly_without_ssl"]

    bug 
    opened by timglabisch 5
  • Add execute() to DeleteOperation

    Sorry for still bothering you!

    Basically there are some kinds of operations, like the deletion of an index, that just return { "acknowledged": true } in case of success.

    This PR adds the function execute, which returns the status of the request directly.

    opened by RoxasShadow 5
  • Restructure distance sorting JSON.

    This fixes issue #150, where the JSON is wrong for distance sorting. The structure is a little non-standard, as it is wrapped in an object with hardcoded _geo_distance as name.

    Reaching through inner-outer and tupled structs is unfortunate, but seems to be necessary in order to keep the API the same and to keep everything nicely contained inside a GeoDistance struct as seen from the consumer or user.

    opened by berkes 0
  • Distance sort uses the wrong json structure.

    When crafting a distance sort, we get a wrong JSON structure:

       use rs_es::units as rs_u;
       use rs_es::operations::search::{Order, GeoDistance};
    
       GeoDistance::new("coord")
           .with_location(rs_u::Location::LatLon(coord.y, coord.x))
           .with_order(Order::Desc)
           .with_unit(rs_u::DistanceUnit::Meter)
           .build();
    

    This gives us a:

        {
            "field": "coord",
            "location": {
                "lat": 51.84222,
                "lon": 5.85938
            },
            "order": "desc",
            "unit": "m"
        }
    

    But it should be:

        {
            "_geo_distance": {
                "coord": {
                    "lat": 51.84222,
                    "lon": 5.85938
                },
                "order": "desc",
                "unit": "m"
            }
        }
    
    opened by berkes 0
  • Bulk operation failure is not correctly parsed

    If one of the bulk operations fails, the returned result will not contain fields that are present only on success, such as shards and _version; this results in a JSON error, so it's impossible to get the actual error.

    Making those fields Option works, but there is no error field. I added it in a fork and tried this:

    #[derive(Debug, serde::Deserialize)]
    pub struct BulkError {
        #[serde(rename = "type")]
        pub kind: String,
        pub reason: String,
        pub index_uuid: String,
        pub shard: String,
        pub index: String,
    }
    

    following https://www.elastic.co/guide/en/elasticsearch/reference/7.7/docs-bulk.html#bulk-api-response-body, but it's reported that index_uuid was missing. My ElasticSearch is 7.3, so I looked at https://www.elastic.co/guide/en/elasticsearch/reference/7.3/docs-bulk.html#bulk-api-response-body, but the response body is not documented there; this means Elastic introduced some breaking changes in minor versions as well. At this point I am trying to stay calm. Anyway, I will try soon:

    #[derive(Debug, serde::Deserialize)]
    pub struct BulkError {
        #[serde(rename = "type")]
        pub kind: String,
        pub reason: String,
        pub index_uuid: Option<String>,
        pub shard: String,
        pub index: String,
    }
    
    opened by Stargateur 0
  • An extra field `geo_box` is added to the query.

    I'm not sure if I understand the code correct, so I may be using it all wrong.

    I'm trying to implement a geo bounding box on a field called coord.

    let bbox = Query::build_geo_bounding_box("coord", ((50.5, -10.5), (50.0, -10.0))).build();
    let query = Query::build_bool().with_filter(bbox).build();
    
    let mut search_query = rubber.es_client.search_query();
    
    let search_query = search_query
            .with_ignore_unavailable(true)
            .with_query(&query);
    
    let result = search_query.send()?;
    

    This fails with an error from ES:

     "failed to parse [geo_bbox] query. unexpected field [geo_box]"
    

    Probably, because there is no field geo_bbox. Examining the JSON sent to ES gives me:

    {
      "query": {
        "bool": {
          "filter": {
            "geo_bounding_box": {
              "coord": {
                "geo_box": {
                  "bottom_right": {
                    "lat": 50,
                    "lon": -10
                  },
                  "top_left": {
                    "lat": 50.5,
                    "lon": -10.5
                  }
                }
              }
            }
          }
        }
      }
    

    The offending extra geo_box is nested in coord.

    The correct JSON, If I understand the ES2 Bounding box DSL correctly, should be:

    {
      "query": {
        "bool": {
          "filter": {
            "geo_bounding_box": {
              "coord": {
                "bottom_right": {
                  "lat": 50,
                  "lon": -10
                },
                "top_left": {
                  "lat": 50.5,
                  "lon": -10.5
                }
              }
            }
          }
        }
      }
    

    I'm not sure where that extra geo_box comes from, or whether it is configurable. From reading the code with my limited understanding of Rust, I see no way to disable, configure, or otherwise remove that part. Did I miss something?

    opened by berkes 1
Owner
Ben Ashford