A high-performance, high-reliability observability data pipeline.

Overview

Quickstart  •   Docs  •   Guides  •   Integrations  •   Chat  •   Download

Vector

What is Vector?

Vector is a high-performance, end-to-end (agent and aggregator) observability data pipeline that puts you in control of your observability data. Collect, transform, and route all your logs, metrics, and traces to any vendors you want today and any other vendors you may want tomorrow. Vector enables dramatic cost reduction, novel data enrichment, and data security where you need it, not where it is most convenient for your vendors. It is open source and up to 10x faster than every alternative.

To get started, follow our quickstart guide or install Vector.
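For a feel of what a pipeline looks like, here is a minimal configuration that generates demo log events, parses them with a remap transform, and prints them to stdout. This is a sketch using current component names (demo_logs, remap, console); check the docs for the version you install.

    sources:
      in:
        type: demo_logs        # emit synthetic JSON log events for testing
        format: json
        interval: 1.0

    transforms:
      parse:
        type: remap
        inputs: ["in"]
        source: |-
          # parse the JSON payload into structured fields
          . = parse_json!(.message)

    sinks:
      out:
        type: console          # print events to stdout as JSON
        inputs: ["parse"]
        target: stdout
        encoding:
          codec: json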

Principles

  • Reliable - Built in Rust, Vector's primary design goal is reliability.
  • End-to-end - Deploys as an agent or aggregator. Vector is a complete platform.
  • Unified - Logs, metrics, and traces (coming soon). One tool for all of your data.

Use cases

  • Reduce total observability costs.
  • Transition vendors without disrupting workflows.
  • Enhance data quality and improve insights.
  • Consolidate agents and eliminate agent fatigue.
  • Improve overall observability performance and reliability.

Community

  • Vector is relied on by startups and enterprises like Atlassian, T-Mobile, Comcast, Zendesk, Discord, Fastly, CVS, Trivago, Tuple, Douban, Visa, Mambu, Blockfi, Claranet, Instacart, Forcepoint, and many more.
  • Vector is downloaded over 100,000 times per day.
  • Vector's largest user processes over 30TB daily.
  • Vector has over 100 contributors and growing.

Documentation

  • About
  • Setup
  • Reference
  • Administration
  • Resources

Comparisons

Performance

The following performance tests measure baseline throughput over common protocols, with the exception of the Regex Parsing test.

Test               Vector      Filebeat    FluentBit   FluentD     Logstash    Splunk UF   Splunk HF
TCP to Blackhole   86 MiB/s    n/a         64.4 MiB/s  27.7 MiB/s  40.6 MiB/s  n/a         n/a
File to TCP        76.7 MiB/s  7.8 MiB/s   35 MiB/s    26.1 MiB/s  3.1 MiB/s   40.1 MiB/s  39 MiB/s
Regex Parsing      13.2 MiB/s  n/a         20.5 MiB/s  2.6 MiB/s   4.6 MiB/s   n/a         7.8 MiB/s
TCP to HTTP        26.7 MiB/s  n/a         19.6 MiB/s  <1 MiB/s    2.7 MiB/s   n/a         n/a
TCP to TCP         69.9 MiB/s  5 MiB/s     67.1 MiB/s  3.9 MiB/s   10 MiB/s    70.4 MiB/s  7.6 MiB/s

To learn more about our performance tests, please see the Vector test harness.

Correctness

The following correctness tests are not exhaustive, but they demonstrate fundamental differences in quality and attention to detail:

The tests compare Vector, Filebeat, FluentBit, FluentD, Logstash, Splunk UF, and Splunk HF on:

  • Disk Buffer Persistence
  • File Rotate (create)
  • File Rotate (copytruncate)
  • File Truncation
  • Process (SIGHUP)
  • JSON (wrapped)

To learn more about our correctness tests, please see the Vector test harness.

Features

Vector is an end-to-end, unified, open data platform.

The comparison covers Vector, Beats, Fluentbit, Fluentd, Logstash, Splunk UF, and Splunk HF on:

  • End-to-end: agent and aggregator deployment roles
  • Unified: logs, metrics, and traces (traces 🚧 coming soon)
  • Open: open-source and vendor-neutral
  • Reliability: memory-safe, delivery guarantees, multi-core

Note: where metrics are represented as structured logs, they are not interoperable.


Developed with ❤️ by Timber.io - Security Policy - Privacy Policy

Comments
  • perf: Tokio compat


    This PR follows the proposal for #1142

    It switches our runtime to tokio 0.2 with very few changes, and the value in this is that we can run benchmarks and performance tests to ensure there's no degradation from upgrading to the new tokio reactor.

    Addresses #1695 and #1696.

    Current state (updated as we go):

    • [x] fd leak (https://github.com/tokio-rs/tokio-compat/issues/27)
    • [x] test_max_size and test_max_size_resume tests failing (it was happening from before, https://github.com/timberio/vector/pull/1922#issuecomment-597018268)
    • [x] test_source_panic (we concluded it was failing before, https://github.com/timberio/vector/pull/1922#issuecomment-594098317)
    • [x] sinks::statsd::test::test_send_to_statsd failing (see #2016 and #2026)
    • [x] benchmarks (see https://github.com/timberio/vector/pull/1922#issuecomment-594037080 and https://github.com/timberio/vector/pull/1922#issuecomment-594042060)
    • [x] test harness comparison (got raw data loaded and ready at S3, see https://github.com/timberio/vector/pull/1922#issuecomment-594114561 and https://github.com/timberio/vector/pull/1922#issuecomment-594120453; report: https://github.com/timberio/vector/pull/1922#issuecomment-597817765)
    • [x] additional comparison results (https://github.com/timberio/vector/pull/1922#issuecomment-600077012 and https://github.com/timberio/vector/pull/1922#issuecomment-600301648)

There's a tokio-compat-debug branch where I dump a throwaway version of the code with extensive debugging added. I'm only using it to run the code against CI, since my local setup doesn't reproduce the issues, and that branch isn't meant to be merged. Rather, we'll just take the end results from it, if there are any.

    type: enhancement domain: performance 
    opened by MOZGIII 91
  • feat(sources): opentelemetry log


    Example:

    sources:
      otel:
        type: opentelemetry
        address: 0.0.0.0:6788
    

The OpenTelemetry log will be converted into an EventLog:

    {
      // optional
      "attributes": {"k1": "v1"},
      "resources": {"k1": "v1"},
      "message": "xxx",
      "trace_id": "xxx",
      "span_id": "xxx",
      "severity_number": 1,
      "severity_text": "xxx",
      "flags": 1,

      // required
      "timestamp": ts_nano,
      "observed_time_unix_nano": ts_nano, // the source will set this to the current timestamp if it's not present
      "dropped_attributes_count": 0
    }
    
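    For completeness, a sketch of wiring this source to a sink, reusing the source id otel from the config above (the console sink shape matches the other examples on this page):

    sinks:
      console:
        type: console
        inputs: ["otel"]
        target: stdout
        encoding:
          codec: json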

I've been testing this source in our production environment for a week, and it works well! I hope this feature can be merged so I can continue working on opentelemetry-metrics.

    domain: sources domain: external docs 
    opened by caibirdme 67
  • feat(new sink): Initial `pulsar` sink implementation


    closes #690

Missing tests and a proper healthcheck right now; I just wanted to show this version to get some feedback. It's based largely on the kafka sink: it implements Sink and, like the kafka sink, holds the in-flight futures and acks back after completion.

Let me know if this approach is suitable, if there are configuration options we'd like to add, etc. I'd like to hold off on SSL at the moment because I don't think it's well supported in the underlying crate. This change also depends on changes to SinkContext and pulsar-rs, which are worth looking at.

    opened by leshow 64
  • [Merged by Bors] - chore: Add a test implementation of bors


    This PR enables https://bors.tech for our PR merge workflow. Now we run an abbreviated lint-centric check on the PR rather than the full test suite. We've achieved this by running the lint tests inside a container running the timberio/ci_image Docker image.

    We also consolidated and updated the image build process in environment.yml to keep the ci_image up-to-date.

    Then, instead of pressing merge, we use the bors commands to submit the PR to a staging branch where CI is run. The new CI run merges the prior e2e and tests jobs as a group. If these return green then the staging branch is fast-forwarded/merged into master.

    Signed-off-by: James Turnbull [email protected]

    domain: tests domain: ci 
    opened by jamtur01 49
  • feat(new source): Initial `kubernetes_logs` implementation


This PR implements the Kubernetes source according to #2222. A few things differ from the RFC:

    • the name for the source is kubernetes_logs - this wasn't properly addressed in the RFC, but the logs source and other Kubernetes source kinds would very likely have very different internal designs, so we should probably keep them as separate units; to disambiguate from the very beginning, I changed the name to kubernetes_logs;
    • had to bump the MSKV (minimum supported Kubernetes version) to 1.15 - watch bookmarks were introduced in that version, and we want them to avoid extra desync issues.
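    For reference, the user-facing surface is minimal. A sketch of enabling the source (it is designed to need no required options when Vector runs inside the cluster):

    sources:
      kubernetes:
        type: kubernetes_logs    # discovers pod log files and annotates events with pod metadata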

    Done:

    • closes #2626;
    • closes #2627;
    • closes #2628;
    • closes #2629;
    • closes #2630;
    • closes #2631.

    To do (before the merge):

    • [x] shutdown logic fix;
    • [x] address the fingerprinting issue (#2890);
    • [x] move the annotation process to the create_event to optimize out access to the file field;
    • [x] ensure that MultiResponseDecoder doesn't leak memory;
    • [x] add new dependencies to the environment (minikube, skaffold, etc);
    • [x] bookmark parsing error (https://github.com/Arnavion/k8s-openapi/issues/70).

    To do (after the merge):

    • more configuration and knobs;
    • #2632 (integration tests);
    • #2633 (documentation).

    External refs:

    • https://github.com/aws/containers-roadmap/issues/809
    • https://github.com/aws/containers-roadmap/issues/61

The overall approach was to build highly composable components, so that we can reuse them for further work - like adding a sidecar-oriented transform for pod annotation, or exclusions based on namespace labels.

    opened by MOZGIII 49
  • enhancement(elasticsearch sink): Multiple hosts


    Ref #3649

    Implements multiple host feature for elasticsearch sink with failover. Where the amount of inflight requests for each endpoint is an estimate of its load.
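    A minimal sketch of the resulting user-facing config, assuming an endpoints list as the feature describes (the hosts and input name are hypothetical):

    sinks:
      es:
        type: elasticsearch
        inputs: ["in"]
        endpoints:                              # requests fail over between endpoints,
          - "https://es-1.example.com:9200"     # preferring the least-loaded healthy one
          - "https://es-2.example.com:9200"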

    Possible extensions

    • Related to splitting the ARC layer (https://github.com/vectordotdev/vector/issues/3649#issuecomment-1148033374), it's possible to split the statistics-gathering component off from the ARC controller, although some changes/refactoring will be needed. Once extracted, that component can be added to HealthService, or better yet become its own layer that provides a load estimate for a specific endpoint, where the load would be something like in_flight/current_limit as a float.
    • A sniffing feature can be added by implementing Stream for it and replacing the fixed list of services with it at https://github.com/ktff/vector/blob/07bbc61be01ca16b4396b4554c144c9bbadec6c4/src/sinks/elasticsearch/config.rs#L358
    • Reusing the distribution layer should be relatively straightforward for sinks that use TowerRequestSettings.

    Todo

    • [x] Internal metrics
    • [x] Internal logs
    • [x] Documentation
    domain: sinks domain: external docs 
    opened by ktff 48
  • enhancement(vrl): Implement a VM for VRL


    This is the first iteration on a Virtual Machine for running VRL code.

The VM is currently behind a feature flag, vrl-vm, which is off by default, so running this branch will still use the old tree-walking interpreter.

    Most of the VM code is found in https://github.com/vectordotdev/vector/tree/stephen/vrl-vm/lib/vrl/compiler/src/vm

    Currently, a VRL program is parsed to an AST of nodes that implement Expression. Running the program involves calling resolve on each node.

    To compile to bytecode, the Expression trait has a new method called compile_to_vm - https://github.com/vectordotdev/vector/blob/stephen/vrl-vm/lib/vrl/compiler/src/expression.rs#L58-L60. Compiling is now a process of calling compile_to_vm which generates the bytecode for the VM.

No existing tree-walking code has been changed, so it is possible to run both the tree-walking interpreter and the VM side by side to ensure that no actual functionality changes.

    Not all functions in the stdlib have been converted to run with the VM. The following are supported:

    array, contains, del, downcase, exists, flatten, integer, is_array, is_boolean, is_float, is_integer, is_null, is_object, is_regex, is_string, is_timestamp, match_datadog_query, merge, now, object, parse_groks, parse_json, parse_key_value, parse_regex, parse_regex_all, parse_url, push, slice, starts_with, to_string, to_timestamp, upcase

Both the VRL CLI and the tests project work with the VM. The vrl-vm feature flag needs to be set in order to use it.


    Old text:

    This is a very rough proof of concept for a bytecode virtual machine and a VRL -> bytecode compiler.

    It is capable of running the following VRL (and not much else):

    .hostname = "vector"
    
    if .status == "warning" {
      .thing = upcase(.hostname)
    } else if .status == "notice" {
      .thung = downcase(.hostname)
    } else {
      .nong = upcase(.hostname)
    }
    
    .matches = { "name": .message, "num": "2" }
    .origin, .err = .hostname + "/" + .matches.name + "/" + .matches.num
    

    Initial and very rough benchmarks indicate that it was able to reduce the runtime for 1,000,000 events from 17 seconds to 10.

I do need to double-check these figures as I find them surprisingly optimistic. But it does suggest that this approach could be a very beneficial avenue to pursue further.

    domain: transforms domain: vrl 
    opened by StephenWakely 44
  • QA the new automatic concurrency feature


    https://github.com/timberio/vector/pull/3094 introduces the ability for Vector to automatically determine the optimal in_flight_limit value. RFC 1858 outlines this feature. Given the automatic nature of this feature, we need to take care to QA this more than usual.

    Setup

    1. First, this should be black-box, end-to-end, integration-style testing. @bruceg covered unit tests in #3094. We want to test real-world usage as much as possible.
    2. While we could set up a real service, like Elasticsearch, I think we're better off creating a simple simulator that gives us more control over scenarios. This could build upon our http_test_server project.

    Questions

    We want to answer:

    1. How quickly does Vector find the optimal value given the http sink defaults?
    2. Does Vector correctly back off in the event of total service failure?
    3. Does Vector correctly back off in the event of gradually increasing response times?
    4. Does Vector correctly back off in the event of sudden sustained increasing response times?
    5. Does Vector correctly ignore one-off errors that are not indicative of service degradation? Such as one-off network errors. We want to make sure Vector does not overreact in these situations.
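    For reference, a sketch of how this surfaces in a sink config in later releases, where the in_flight_limit knob became request.concurrency and the adaptive mode enables this feature (the URI points at a hypothetical http_test_server-style simulator):

    sinks:
      test_http:
        type: http
        inputs: ["in"]
        uri: http://localhost:8080/    # hypothetical local simulator endpoint
        encoding:
          codec: json
        request:
          concurrency: adaptive        # let Vector discover the optimal concurrency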
    type: task domain: networking 
    opened by binarylogic 44
  • feat(sources): New amqp (Rabbitmq) Source/Sink


    Add amqp source and sink.

    Closes: https://github.com/vectordotdev/vector/issues/670
    Closes: https://github.com/vectordotdev/vector/issues/558
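    As a rough sketch of what the user-facing config might look like (field names are assumptions based on the component docs; verify against your Vector version):

    sources:
      rabbit_in:
        type: amqp
        connection_string: "amqp://user:password@rabbitmq:5672/%2f"
        queue: "events"                # consume from this queue

    sinks:
      rabbit_out:
        type: amqp
        inputs: ["rabbit_in"]
        connection_string: "amqp://user:password@rabbitmq:5672/%2f"
        exchange: "events-out"         # publish to this exchange
        encoding:
          codec: json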

    domain: sources domain: sinks domain: ci domain: external docs 
    opened by dbcfd 42
  • enhancement(observability): add tracking allocations


This PR begins tracking allocations with the allocation tracer. Based on the work from #14933.

    domain: topology 
    opened by arshiyasolei 41
  • feat(windows platform): add a way to register and run vector as a service


    On Windows, Vector should be able to run as a service.

    This must be done in two distinct steps:

First, the user must register vector as a service in the Service Control Manager. A new --service command-line parameter has been introduced (Windows only), with multiple possible values:

    • --service install - install vector as a service
    • --service uninstall - uninstall the service
    • --service start - start the service
    • --service stop - stop the service
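    For example, registering and starting the service from an elevated prompt (assuming the binary is on PATH as vector):

    vector --service install
    vector --service start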

Then, vector must provide a service entry point and register itself with the Service Control Manager. A dependency on the windows-service crate has been introduced to handle service installation and registration.

The main function has now been split into two distinct mains: the old main under #[cfg(not(windows))] and a new main under #[cfg(windows)].

The new main for Windows first attempts to start as a service, and then falls back to console mode. The launch code has been extracted to a new service.rs file so that it can be called from the part that handles service startup on Windows.

I think this PR still needs some work. I'm pretty new to Rust (coming from a C++ background), so my coding style might not fit your standards. I also need some suggestions on how to properly handle and wait for the sources to shut down before stopping the service on Windows (vector_windows.rs:60).

This feature was inspired by telegraf.

    platform: windows 
    opened by oktal 41
  • Dash "-" not supported in secret_key


    Problem

    Hello

I have a test secret "ling-dev-elk-0". Vector is skipping secret resolution due to the dash in the name: "No secret placeholder found, skipping secret resolution."

    "SECRET[az_secret.ling-dev-elk-0]" - not working

I created a second secret, lingdevelk0, and it works as expected.

    "SECRET[az_secret.lingdevelk0]" - working

I did not find any info about dash support, only about multiple dots: https://github.com/vectordotdev/vector/blob/master/rfcs/2022-02-24-11552-dd-agent-style-secret-management.md

    Configuration

    secret:
      az_secret:
        type: exec
        command:
          - "/etc/vector/secret.sh"
          - "-v"
          - "la-kv-02012023"
          - "-s"
          - "ling-dev-elk-0"
    sources:
      file:
        type: file
        include:
          - /etc/vector/1.log
        read_from: beginning
    transforms:
      add_field_from_secret:
        inputs: ["file"]
        type: remap
        source: |-
          .secret = "SECRET[az_secret.ling-dev-elk-0]"
    sinks:
      console:
        type: console
        inputs:
          - add_field_from_secret
        target: stdout
        encoding:
          codec: json
    

    Version

    vector 0.26.0 (x86_64-unknown-linux-gnu c6b5bc2 2022-12-05)

    Debug Output

    2023-01-02T14:59:33.605386Z  INFO vector::app: Internal log rate limit configured. internal_log_rate_secs=10
    2023-01-02T14:59:33.606507Z  INFO vector::app: Log level is enabled. level="vector=trace,codec=trace,vrl=trace,file_source=trace,tower_limit=trace,rdkafka=trace,buffers=trace,lapin=trace,kube=trace"
    2023-01-02T14:59:33.606647Z  INFO vector::app: Loading configs. paths=["vector.yaml"]
    2023-01-02T14:59:33.607383Z DEBUG vector::config::loading: No secret placeholder found, skipping secret resolution.
    2023-01-02T14:59:33.608237Z DEBUG vector::topology::builder: Building new source. component=file
    ...
    


    type: bug 
    opened by mulat666 0
  • Secret not working as a sinks config parameter



    Problem

    Hello

I would like to use a secret value in the sinks config for the Kafka password, etc. Unfortunately it's not possible; Vector fails with the error message: Configuration error. error=Error while retrieving secret from backend "az_secret": secret for key 'lingdevelk0' was empty.

When I'm not using the secret in the sink part, it works fine: after the transformation, a new field with the correct value is visible. Test config below.

    Configuration

    secret:
      az_secret:
        type: exec
        command:
          - "/etc/vector/secret.sh"
          - "-v"
          - "la-kv-02012023"
          - "-s"
          - "lingdevelk0"
    sources:
      file:
        type: file
        include:
          - /etc/vector/1.log
        read_from: beginning
    transforms:
      add_field_from_secret:
        inputs: ["file"]
        type: remap
        source: |-
          .secret = "SECRET[az_secret.lingdevelk0]"
    sinks:
      console:
        type: console
        inputs:
          - add_field_from_secret
        target: stdout
        encoding:
          codec: json
      my_sink_id:
        type: kafka
        inputs:
          - add_field_from_secret
        bootstrap_servers: test.kafka.dev
        key_field: user_id
        topic: topic123
        compression: none
        encoding:
          codec: json
        librdkafka_options:
          "security.protocol": "sasl_ssl"
          "sasl.mechanism": "PLAIN"
          "sasl.username": "SECRET[az_secret.lingdevelk0]"
          "sasl.password": "SECRET[az_secret.lingdevelk0]"
    

    Version

    vector 0.26.0 (x86_64-unknown-linux-gnu c6b5bc2 2022-12-05)

    Debug Output

Logs when I'm using secrets in the kafka sink:

    2023-01-02T14:18:34.828090Z  INFO vector::app: Internal log rate limit configured. internal_log_rate_secs=10
    2023-01-02T14:18:34.829859Z  INFO vector::app: Log level is enabled. level="vector=trace,codec=trace,vrl=trace,file_source=trace,tower_limit=trace,rdkafka=trace,buffers=trace,lapin=trace,kube=trace"
    2023-01-02T14:18:34.829929Z  INFO vector::app: Loading configs. paths=["vector.yaml"]
    2023-01-02T14:18:34.830734Z DEBUG vector::config::loading: Secret placeholders found, retrieving secrets from configured backends.
    2023-01-02T14:18:34.830765Z DEBUG vector::config::loading::secret: Retrieving secret from a backend. backend="az_secret"
    2023-01-02T14:18:37.299927Z ERROR vector::cli: Configuration error. error=Error while retrieving secret from backend "az_secret": secret for key 'lingdevelk0' was empty.
    


    Additional Context

Logs when I'm not using the kafka sink and the secret is used only in the transform:

    2023-01-02T14:26:34.407790Z  INFO vector::app: Internal log rate limit configured. internal_log_rate_secs=10
    2023-01-02T14:26:34.408411Z  INFO vector::app: Log level is enabled. level="vector=trace,codec=trace,vrl=trace,file_source=trace,tower_limit=trace,rdkafka=trace,buffers=trace,lapin=trace,kube=trace"
    2023-01-02T14:26:34.408530Z  INFO vector::app: Loading configs. paths=["vector.yaml"]
    2023-01-02T14:26:34.409236Z DEBUG vector::config::loading: Secret placeholders found, retrieving secrets from configured backends.
    2023-01-02T14:26:34.409262Z DEBUG vector::config::loading::secret: Retrieving secret from a backend. backend="az_secret"
    2023-01-02T14:26:36.182540Z TRACE vector::config::loading::secret: Successfully retrieved a secret. backend="az_secret" secret_key="lingdevelk0"
    2023-01-02T14:26:36.184120Z DEBUG vector::topology::builder: Building new source. component=file
    ...
    

    References

    #15414

    type: bug 
    opened by mulat666 0
  • chore(deps): bump webbrowser from 0.8.2 to 0.8.4


    Bumps webbrowser from 0.8.2 to 0.8.4.

    Release notes

    Sourced from webbrowser's releases.

    v0.8.4

    Releasing v0.8.4 with the following changes:

    Fixed

    • Urgent bug fix for windows, where rendering broke on Firefox & Chrome. See #60

    v0.8.3

    Releasing v0.8.3 with the following changes:

    Added

    • Web browser is guaranteed to open for local files even if the local file association was to a non-browser app (say, an editor). This is now formally incorporated as part of this crate's Consistent Behaviour
    • WSL support, thanks to @Nachtalb. This works even if wslu is not installed in WSL environments.
    • A new feature, hardened, is now available for applications which require that only http(s) urls be opened. This acts as a security feature.

    Changed

    • On macOS, we now use the CoreFoundation library instead of the open command.
    • On Linux/*BSD, we now parse the xdg configuration to execute the command directly, instead of using the xdg-open command. This allows us to open the browser for local html files, even if the .html extension was associated with an editor (see #55)

    Fixed

    • The guarantee of the web browser being opened (instead of the local file association) now covers the scenario where the URL is crafted to be an executable. This was reported privately by @offalltn.
    Commits
    • 1479be8 Release v0.8.4 [skip ci]
    • ff7b51e windows: move away from ShellExecuteEx to AssocQueryStringW
    • f617dbd Add SECURITY.md [skip ci]
    • f551bc1 fix release process - include firefox test screenshot.png in .gitignore [skip...
    • 3787624 changelog: fix consistent behaviour link [skip ci]
    • b910135 Release v0.8.3
    • bc60648 changelog: update for v0.8.3
    • 81dac1b better documentation of library guarantees
    • bbfabcc wsl: allow for disabling of wsl file impl via disable-wsl feature #build-linux
    • 3f5eb2c wsl: make WSL implementation compliant with the Consistent Behaviour for file...
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    domain: deps 
    opened by dependabot[bot] 4
  • chore(deps): bump infer from 0.11.0 to 0.12.0


    Bumps infer from 0.11.0 to 0.12.0.

    Release notes

    Sourced from infer's releases.

    v0.12.0

    Changelog

    v0.12.0 - 2023-01-01

    Build

    • 67f2984 attempt to fix git-chglog installation in github release action by specifing latest version
    • 3a7eb25 attempt to fix git-chglog installation in github release action
    • dde7b3c update cargo version

    Features

    • 3a989e4 add support for OpenRaster (ora) format

    Commits

    • 67f2984 build: attempt to fix git-chglog installation in github release action by spe...
    • 3a7eb25 build: attempt to fix git-chglog installation in github release action
    • dde7b3c build: update cargo version
    • eb9b60e Merge pull request #82 from Lynnesbian/master
    • 9b6d0b0 Merge pull request #81 from chrsmth/feat/ora
    • ef46e98 Merge pull request #80 from kailes/master
    • 0e6dacb add CPIO to README
    • 01bdb18 CPIO support
    • 9f9b90f Move OpenRaster to images
    • 3a989e4 feat: add support for OpenRaster (ora) format
    • ae2dbec Fix infinite while loop when buf size is less than 3
    • Additional commits viewable in compare view

    domain: deps 
    opened by dependabot[bot] 4
  • chore(deps): bump cidr-utils from 0.5.9 to 0.5.10


    Bumps cidr-utils from 0.5.9 to 0.5.10.


    domain: deps 
    opened by dependabot[bot] 4
  • chore(deps): bump nom from 7.1.1 to 7.1.2


    Bumps nom from 7.1.1 to 7.1.2.

    Changelog

    Sourced from nom's changelog.

    7.1.2 - 2023-01-01

    Changed

    • documentation fixes
    • tests fixes
    • limit the initial capacity of the result vector of many_m_n to 64kiB
    • bits parser now accept Parser implementors instead of only functions

    Added

    • implement Tuple parsing for the unit type as a special case
    • implement ErrorConvert on the unit type to make it usable as error type for bits parsers
    • bool parser for bits input
    domain: deps 
    opened by dependabot[bot] 4