A high-performance observability data pipeline.

Overview

Get Started • Docs • Guides • Integrations • Chat • Download

Vector

What is Vector?

Vector is a high-performance, end-to-end (agent & aggregator) observability data platform that puts you in control of your observability data. Collect, transform, and route all your logs, metrics, and traces to any vendors you want today and any other vendors you may want tomorrow. Vector enables cost reduction, novel data enrichment, and data security when you need it, not when it is most convenient for your vendors. 100% open source and up to 10x faster than every alternative.

To get started, follow our getting started guides or install Vector.
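
To get a feel for configuration, here is a minimal, hypothetical pipeline (the component names and file path are invented for illustration): it tails a file, parses each line as JSON with the remap transform, and writes the resulting events to stdout.

```yaml
sources:
  app_log:
    type: file
    include:
      - /var/log/app.log

transforms:
  parse:
    type: remap
    inputs: [app_log]
    source: |
      # Replace the event with its message parsed as JSON.
      . = parse_json!(string!(.message))

sinks:
  console:
    type: console
    inputs: [parse]
    encoding:
      codec: json
```

Run it with something like `vector --config vector.yaml`.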

Principles

  • Reliable - Built in Rust, Vector's primary design goal is reliability.
  • End-to-end - Deploys as an agent or aggregator. Vector is a complete platform.
  • Unified - Logs, metrics, and traces (coming soon). One tool for all of your data.

Use cases

  • Reduce total observability costs.
  • Transition vendors without disrupting workflows.
  • Enhance data quality and improve insights.
  • Consolidate agents and eliminate agent fatigue.
  • Improve overall observability performance and reliability.

Community

  • Vector is relied on by startups and enterprises like Atlassian, T-Mobile, Comcast, Zendesk, Discord, Fastly, CVS, Trivago, Tuple, Douban, Visa, Mambu, Blockfi, Claranet, Instacart, Forcepoint, and many more.
  • Vector is downloaded over 100,000 times per day.
  • Vector's largest user processes over 30TB daily.
  • Vector has over 100 contributors and growing.

Documentation

  • About
  • Setup
  • Reference
  • Administration

Resources

Comparisons

Performance

The following performance tests demonstrate baseline performance between common protocols, with the exception of the Regex Parsing test.

| Test | Vector | Filebeat | FluentBit | FluentD | Logstash | Splunk UF | Splunk HF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TCP to Blackhole | 86 MiB/s | n/a | 64.4 MiB/s | 27.7 MiB/s | 40.6 MiB/s | n/a | n/a |
| File to TCP | 76.7 MiB/s | 7.8 MiB/s | 35 MiB/s | 26.1 MiB/s | 3.1 MiB/s | 40.1 MiB/s | 39 MiB/s |
| Regex Parsing | 13.2 MiB/s | n/a | 20.5 MiB/s | 2.6 MiB/s | 4.6 MiB/s | n/a | 7.8 MiB/s |
| TCP to HTTP | 26.7 MiB/s | n/a | 19.6 MiB/s | <1 MiB/s | 2.7 MiB/s | n/a | n/a |
| TCP to TCP | 69.9 MiB/s | 5 MiB/s | 67.1 MiB/s | 3.9 MiB/s | 10 MiB/s | 70.4 MiB/s | 7.6 MiB/s |

To learn more about our performance tests, please see the Vector test harness.

Correctness

The following correctness tests are not exhaustive, but they demonstrate fundamental differences in quality and attention to detail:

Each tool (Vector, Filebeat, FluentBit, FluentD, Logstash, Splunk UF, Splunk HF) is exercised against:

  • Disk Buffer Persistence
  • File Rotate (create)
  • File Rotate (copytruncate)
  • File Truncation
  • Process (SIGHUP)
  • JSON (wrapped)

To learn more about our correctness tests, please see the Vector test harness.

Features

Vector is an end-to-end, unified, open data platform.

The comparison covers Vector, Beats, Fluentbit, Fluentd, Logstash, Splunk UF, and Splunk HF across the following dimensions:

  • End-to-end: deploys as an agent and as an aggregator
  • Unified: logs, metrics, and traces (🚧)
  • Open: open-source and vendor-neutral
  • Reliability: memory-safe, with delivery guarantees and multi-core scaling

⚠ = Not interoperable, metrics are represented as structured logs


Developed with ❤️ by Timber.io - Security Policy - Privacy Policy

Comments
  • perf: Tokio compat

    This PR follows the proposal for #1142

It switches our runtime to tokio 0.2 with very few changes. The value in this is that we can run benchmarks and performance tests to ensure there's no degradation from upgrading to the new tokio reactor.

    Addresses #1695 and #1696.

    Current state (updated as we go):

    • [x] fd leak (https://github.com/tokio-rs/tokio-compat/issues/27)
    • [x] test_max_size and test_max_size_resume tests failing (it was happening from before, https://github.com/timberio/vector/pull/1922#issuecomment-597018268)
    • [x] test_source_panic (we concluded it was failing before, https://github.com/timberio/vector/pull/1922#issuecomment-594098317)
    • [x] sinks::statsd::test::test_send_to_statsd failing (see #2016 and #2026)
    • [x] benchmarks (see https://github.com/timberio/vector/pull/1922#issuecomment-594037080 and https://github.com/timberio/vector/pull/1922#issuecomment-594042060)
    • [x] test harness comparison (got raw data loaded and ready at S3, see https://github.com/timberio/vector/pull/1922#issuecomment-594114561 and https://github.com/timberio/vector/pull/1922#issuecomment-594120453; report: https://github.com/timberio/vector/pull/1922#issuecomment-597817765)
    • [x] additional comparison results (https://github.com/timberio/vector/pull/1922#issuecomment-600077012 and https://github.com/timberio/vector/pull/1922#issuecomment-600301648)

There's a tokio-compat-debug branch that I use to dump a throwaway version of the code, altered with extensive debugging. I'm only using it to run the code against CI, since my local setup doesn't reproduce the issues; that branch isn't supposed to be merged. Rather, we'll just take the end results from it, if there are any.

    type: enhancement domain: performance 
    opened by MOZGIII 91
  • feat(sources): opentelemetry log

    Example:

```yaml
sources:
  otel:
    type: opentelemetry
    address: 0.0.0.0:6788
```

The opentelemetry log will be converted into an EventLog:

```jsonc
{
  // optional
  "attributes": { "k1": "v1" },
  "resources": { "k1": "v1" },
  "message": "xxx",
  "trace_id": "xxx",
  "span_id": "xxx",
  "severity_number": 1,
  "severity_text": "xxx",
  "flags": 1,

  // required
  "timestamp": ts_nano,
  "observed_time_unix_nano": ts_nano, // the source sets this to the current timestamp if absent
  "dropped_attributes_count": 0
}
```

I've been testing this source in our production for a week, and it works well! I hope this feature can be merged so I can continue to work on opentelemetry-metrics.

    domain: sources domain: external docs 
    opened by caibirdme 67
  • feat(new sink): Initial `pulsar` sink implementation

    closes #690

Missing tests and a proper healthcheck right now; I just wanted to show this version to get some feedback. It's based largely on the kafka sink. It implements Sink; like the kafka sink, it holds the in-flight futures and acks back after completion.

Let me know if this approach is suitable, if there are some configuration options missing we'd like to add, etc. I'd like to hold off on SSL at the moment because I don't think it's well supported in the underlying crate. This change also depends on a change to SinkContext and pulsar-rs, so that's worth looking at as well.
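
As a rough illustration of that pattern (not the PR's actual code; `DeliveryFuture` and the `ack` callback are stand-ins), the sink parks each broker send as a future and acknowledges upstream only once the send completes:

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use std::{future::Future, pin::Pin};

/// Stand-in for whatever per-send future the client returns.
type DeliveryFuture = Pin<Box<dyn Future<Output = Result<u64, ()>> + Send>>;

/// Drain completed sends, acking each delivery only after the broker
/// has confirmed the write.
async fn drive_acks(mut in_flight: FuturesUnordered<DeliveryFuture>, ack: impl Fn(u64)) {
    while let Some(result) = in_flight.next().await {
        if let Ok(sequence_id) = result {
            ack(sequence_id);
        }
    }
}
```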

    opened by leshow 64
  • [Merged by Bors] - chore: Add a test implementation of bors

    This PR enables https://bors.tech for our PR merge workflow. Now we run an abbreviated lint-centric check on the PR rather than the full test suite. We've achieved this by running the lint tests inside a container running the timberio/ci_image Docker image.

    We also consolidated and updated the image build process in environment.yml to keep the ci_image up-to-date.

    Then, instead of pressing merge, we use the bors commands to submit the PR to a staging branch where CI is run. The new CI run merges the prior e2e and tests jobs as a group. If these return green then the staging branch is fast-forwarded/merged into master.

    Signed-off-by: James Turnbull [email protected]

    domain: tests domain: ci 
    opened by jamtur01 49
  • feat(new source): Initial `kubernetes_logs` implementation

This PR implements the Kubernetes source according to #2222. A few things differ from the RFC:

• the name for the source is kubernetes_logs - this wasn't properly addressed in the RFC, but the logs source and the other kubernetes source kinds would very likely have very different internal designs, so we should probably keep them as separate units; to disambiguate from the very beginning, I changed the name to kubernetes_logs;
• had to bump the MSKV to 1.15 - watch bookmarks were introduced in that version, and we want them to avoid extra desync issues.

    Done:

    • closes #2626;
    • closes #2627;
    • closes #2628;
    • closes #2629;
    • closes #2630;
    • closes #2631.

    To do (before the merge):

    • [x] shutdown logic fix;
    • [x] address the fingerprinting issue (#2890);
    • [x] move the annotation process to the create_event to optimize out access to the file field;
    • [x] ensure that MultiResponseDecoder doesn't leak memory;
    • [x] add new dependencies to the environment (minikube, skaffold, etc);
    • [x] bookmark parsing error (https://github.com/Arnavion/k8s-openapi/issues/70).

    To do (after the merge):

    • more configuration and knobs;
    • #2632 (integration tests);
    • #2633 (documentation).

    External refs:

    • https://github.com/aws/containers-roadmap/issues/809
    • https://github.com/aws/containers-roadmap/issues/61

    The overall approach was to build highly composable components, so that we can reuse them for further work - like for adding a sidecar-oriented transform for pod annotation, or exclusions based on namespace labels.

    opened by MOZGIII 49
  • enhancement(elasticsearch sink): Multiple hosts

    Ref #3649

Implements the multiple-hosts feature for the elasticsearch sink, with failover, where the number of in-flight requests for each endpoint serves as an estimate of its load.
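
A hedged sketch of what this enables in a config (the `endpoints` option name is taken from this feature; the exact spelling and surrounding fields may differ by version, and the hostnames are invented):

```yaml
sinks:
  es:
    type: elasticsearch
    inputs: [my_source]
    # Requests are distributed across these hosts, failing over when an
    # endpoint degrades; in-flight request counts approximate per-endpoint load.
    endpoints:
      - https://es-node-1.example.com:9200
      - https://es-node-2.example.com:9200
```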

    Possible extensions

• Related to splitting the ARC layer (https://github.com/vectordotdev/vector/issues/3649#issuecomment-1148033374): it's possible to split the statistics-gathering component off from the ARC controller, although some changes/refactoring will be needed. Once extracted, that component can be added to HealthService, or, even better, it can be its own layer that provides a load estimate for a specific endpoint, where the load would be something like in_flight/current_limit as a float.
• A sniffing feature can be added by implementing Stream for it and replacing the fixed list of services at https://github.com/ktff/vector/blob/07bbc61be01ca16b4396b4554c144c9bbadec6c4/src/sinks/elasticsearch/config.rs#L358
• Reusing the distribution layer should be relatively straightforward for sinks that use TowerRequestSettings.

    Todo

    • [x] Internal metrics
    • [x] Internal logs
    • [x] Documentation
    domain: sinks domain: external docs 
    opened by ktff 48
  • enhancement(vrl): Implement a VM for VRL

    This is the first iteration on a Virtual Machine for running VRL code.

The VM is currently behind a feature flag, vrl-vm, which is off by default, so running this branch will still use the old tree-walking interpreter.

    Most of the VM code is found in https://github.com/vectordotdev/vector/tree/stephen/vrl-vm/lib/vrl/compiler/src/vm

    Currently, a VRL program is parsed to an AST of nodes that implement Expression. Running the program involves calling resolve on each node.

    To compile to bytecode, the Expression trait has a new method called compile_to_vm - https://github.com/vectordotdev/vector/blob/stephen/vrl-vm/lib/vrl/compiler/src/expression.rs#L58-L60. Compiling is now a process of calling compile_to_vm which generates the bytecode for the VM.
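
Schematically (the method names come from this PR, but the types below are simplified stand-ins rather than Vector's real ones), the trait now carries both execution paths:

```rust
struct Context;                 // runtime state for the tree-walker
struct Vm { bytecode: Vec<u8> } // accumulates emitted opcodes

#[derive(Clone)]
enum Value { Null, Bytes(Vec<u8>) }

trait Expression {
    /// Tree-walking path: evaluate this AST node directly.
    fn resolve(&self, ctx: &mut Context) -> Value;
    /// VM path: emit bytecode for this node instead of evaluating it.
    fn compile_to_vm(&self, vm: &mut Vm);
}

struct Literal(Value);

impl Expression for Literal {
    fn resolve(&self, _ctx: &mut Context) -> Value {
        self.0.clone()
    }
    fn compile_to_vm(&self, vm: &mut Vm) {
        vm.bytecode.push(0x01); // e.g. an OP_CONST opcode; a constant index would follow
    }
}
```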

    No existing tree walking code has been changed, so it is possible to run both tree walking and VM side by side to ensure that no actual functionality is changed.

    Not all functions in the stdlib have been converted to run with the VM. The following are supported:

`array`, `contains`, `del`, `downcase`, `exists`, `flatten`, `integer`, `is_array`, `is_boolean`, `is_float`, `is_integer`, `is_null`, `is_object`, `is_regex`, `is_string`, `is_timestamp`, `match_datadog_query`, `merge`, `now`, `object`, `parse_groks`, `parse_json`, `parse_key_value`, `parse_regex`, `parse_regex_all`, `parse_url`, `push`, `slice`, `starts_with`, `to_string`, `to_timestamp`, `upcase`

Both the VRL CLI and the tests projects work with the VM; the vrl-vm feature flag needs to be set in order to use it.


    Old text:

This is a very rough proof of concept for a bytecode virtual machine and a VRL-to-bytecode compiler.

It is capable of running the following VRL (and not much else):

```
.hostname = "vector"

if .status == "warning" {
  .thing = upcase(.hostname)
} else if .status == "notice" {
  .thung = downcase(.hostname)
} else {
  .nong = upcase(.hostname)
}

.matches = { "name": .message, "num": "2" }
.origin, .err = .hostname + "/" + .matches.name + "/" + .matches.num
```

Initial and very rough benchmarks indicate that it was able to reduce the runtime for 1,000,000 events from 17 seconds to 10 - roughly 59k events/s up to 100k events/s, about a 1.7x speedup.

I do need to double-check these figures, as I find them surprisingly optimistic. But they do suggest that this approach could be a very beneficial avenue to pursue further.

    domain: transforms domain: vrl 
    opened by StephenWakely 44
  • QA the new automatic concurrency feature

https://github.com/timberio/vector/pull/3094 introduces the ability for Vector to automatically determine the optimal in_flight_limit value. RFC 1858 outlines this feature. Given its automatic nature, we need to take care to QA this more than usual.
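
For reference, a hedged sketch of how a sink under test might opt in (the in_flight_limit name comes from this feature; its exact placement and accepted values may differ by version):

```yaml
sinks:
  out:
    type: http
    inputs: [in]
    uri: http://localhost:8080/
    encoding:
      codec: json
    request:
      # Let Vector discover the optimal concurrency rather than pinning a fixed limit.
      in_flight_limit: auto
```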

    Setup

1. First, this should be black-box, end-to-end, integration-style testing. @bruceg covered unit tests in #3094. We want to test real-world usage as much as possible.
    2. While we could set up a real service, like Elasticsearch, I think we're better off creating a simple simulator that will give us more control over scenarios. This could build upon our http_test_server project.

    Questions

    We want to answer:

    1. How quickly does Vector find the optimal value given the http sink defaults?
    2. Does Vector correctly back off in the event of total service failure?
    3. Does Vector correctly back off in the event of gradually increasing response times?
    4. Does Vector correctly back off in the event of sudden sustained increasing response times?
5. Does Vector correctly ignore one-off errors that are not indicative of service degradation, such as one-off network errors? We want to make sure Vector does not overreact in these situations.
    type: task domain: networking 
    opened by binarylogic 44
  • feat(sources): New amqp (Rabbitmq) Source/Sink

    Add amqp source and sink.

Closes: https://github.com/vectordotdev/vector/issues/670
Closes: https://github.com/vectordotdev/vector/issues/558

    domain: sources domain: sinks domain: ci domain: external docs 
    opened by dbcfd 42
  • enhancement(observability): add tracking allocations

    This PR begins tracking allocations w/ the allocation tracer. Based on the work from #14933 .

    domain: topology 
    opened by arshiyasolei 41
  • feat(windows platform): add a way to register and run vector as a service

    On Windows, Vector should be able to run as a service.

    This must be done in two distinct steps:

First, the user must register vector as a service in the Service Control Manager. A new --service command-line parameter has been introduced (Windows only), with multiple possible values:

• --service install - install vector as a service
• --service uninstall - uninstall the service
• --service start - start the service
• --service stop - stop the service

Then, vector must provide a service entry point and register itself with the Service Control Manager. A dependency on the windows-service crate has been introduced to handle service installation and registration.

The main function has now been split into two distinct mains: the old main for #[cfg(not(windows))] and a new main for #[cfg(windows)].

The new main for Windows first attempts to start as a service, and then falls back to console mode. The launch code has been extracted into a new service.rs file so that it can be called from the part that handles service startup on Windows.
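
A minimal sketch of that split (the function bodies are stand-ins, not Vector's actual code):

```rust
#[cfg(windows)]
fn main() {
    // Try to start under the Service Control Manager first; if we weren't
    // launched as a service, fall back to ordinary console mode.
    if run_as_service().is_err() {
        run_console();
    }
}

#[cfg(not(windows))]
fn main() {
    run_console();
}

#[cfg(windows)]
fn run_as_service() -> Result<(), ()> {
    // Hypothetical: register the service entry point with the SCM,
    // e.g. via the windows-service crate's dispatcher.
    Err(())
}

fn run_console() {
    // Hypothetical: the shared launch code extracted into service.rs.
}
```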

I think this PR still needs some work. I'm pretty new to Rust (coming from a C++ background), so my coding style might not fit your standards. I also need some suggestions on how to properly handle and wait for the sources to shut down before stopping the service on Windows (vector_windows.rs:60).

This feature was inspired by telegraf.

    platform: windows 
    opened by oktal 41
  • chore(deps): bump enum_dispatch from 0.3.8 to 0.3.9

    Bumps enum_dispatch from 0.3.8 to 0.3.9.

    Changelog

    Sourced from enum_dispatch's changelog.

    0.3.9

    • Add support for const generics (#51, !25)

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    domain: deps 
    opened by dependabot[bot] 4
  • chore(deps): bump once_cell from 1.16.0 to 1.17.0

    Bumps once_cell from 1.16.0 to 1.17.0.

    Changelog

    Sourced from once_cell's changelog.

    1.17.0

    • Add race::OnceRef for storing a &'a T.
    domain: deps domain: vrl domain: core 
    opened by dependabot[bot] 4
  • chore(deps): bump pest_derive from 2.5.1 to 2.5.2

    Bumps pest_derive from 2.5.1 to 2.5.2.

    Release notes

    Sourced from pest_derive's releases.

    v2.5.2

    Full Changelog: https://github.com/pest-parser/pest/compare/v2.5.1...v2.5.2

    Happy Holidays and Best Wishes for 2023! ☃️🎄 🎆

    domain: deps 
    opened by dependabot[bot] 5
  • Add `multiline` configuration to all Log sources

    A note for the community

    • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
    • If you are interested in working on this issue or have submitted a pull request, please leave a comment

    Use Cases

Users wish to aggregate multiple lines in sources. Some sources, like file, support this, but others do not. There is the reduce transform, but it suffers from two gaps: it is a performance bottleneck due to the lack of https://github.com/vectordotdev/vector/issues/11857, and it is more awkward to use, because the multiline config automatically groups lines by their source while reduce requires doing that grouping manually.
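
For context, the file source's existing multiline support looks roughly like this (the patterns and path are invented; the field names follow the file source's multiline options, though versions may vary):

```yaml
sources:
  app_logs:
    type: file
    include:
      - /var/log/app/*.log
    multiline:
      # A new event begins with a leading date.
      start_pattern: '^\d{4}-\d{2}-\d{2}'
      mode: halt_before
      # Buffer continuation lines until the next line matching this pattern.
      condition_pattern: '^\d{4}-\d{2}-\d{2}'
      timeout_ms: 1000
```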

    Attempted Solutions

    No response

    Proposal

    Add multiline config to all sources that can accept logs.

    References

    No response

    Version

    vector 0.26.0

    domain: sources type: feature 
    opened by jszwedko 0
  • New sink: postgres

    A note for the community

    • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
    • If you are interested in working on this issue or have submitted a pull request, please leave a comment

    Use Cases

    Users want to write data to Postgres from Vector.

    Breaking this off from https://github.com/vectordotdev/vector/issues/939

    Attempted Solutions

    No response

    Proposal

    No response

    References

    • https://github.com/vectordotdev/vector/issues/6556
    • https://github.com/vectordotdev/vector/issues/939

    Version

    vector 0.26.0

    type: feature sink: new 
    opened by jszwedko 3