Vector

A high-performance observability data pipeline.

Get Started • Docs • Guides • Integrations • Chat • Download

What is Vector?

Vector is a high-performance, end-to-end (agent and aggregator) observability data platform that puts you in control of your observability data. Collect, transform, and route all of your logs, metrics, and traces to any vendors you want today and any other vendors you may want tomorrow. Vector enables cost reduction, novel data enrichment, and data security when you need it, not when it is most convenient for your vendors. 100% open source and up to 10x faster than every alternative.

To get started, follow our getting started guides or install Vector.
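
For a concrete feel, here is a minimal config sketch in the agent pattern; the component names and file paths are illustrative, and the options shown follow the file source, remap transform, and console sink as documented:

    # vector.yaml - collect, transform, route (illustrative sketch)
    sources:
      app_logs:
        type: file
        include:
          - /var/log/app/*.log   # hypothetical path

    transforms:
      parse:
        type: remap
        inputs:
          - app_logs
        source: |-
          # parse the line as JSON if possible, otherwise fall back to an empty object
          .parsed = parse_json(.message) ?? {}

    sinks:
      out:
        type: console
        inputs:
          - parse
        encoding:
          codec: json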

Principles

  • Reliable - Built in Rust, Vector's primary design goal is reliability.
  • End-to-end - Deploys as an agent or aggregator. Vector is a complete platform.
  • Unified - Logs, metrics, and traces (coming soon). One tool for all of your data.

Use cases

  • Reduce total observability costs.
  • Transition vendors without disrupting workflows.
  • Enhance data quality and improve insights.
  • Consolidate agents and eliminate agent fatigue.
  • Improve overall observability performance and reliability.

Community

  • Vector is relied on by startups and enterprises like Atlassian, T-Mobile, Comcast, Zendesk, Discord, Fastly, CVS, Trivago, Tuple, Douban, Visa, Mambu, Blockfi, Claranet, Instacart, Forcepoint, and many more.
  • Vector is downloaded over 100,000 times per day.
  • Vector's largest user processes over 30TB daily.
  • Vector has over 100 contributors and growing.

Documentation

  • About
  • Setup
  • Reference
  • Administration
  • Resources

Comparisons

Performance

The following performance tests demonstrate baseline throughput between common protocols; the exception is the Regex Parsing test, which exercises a basic parsing workload.

Test               Vector       Filebeat    FluentBit    FluentD      Logstash    Splunk UF    Splunk HF
TCP to Blackhole   86 MiB/s     n/a         64.4 MiB/s   27.7 MiB/s   40.6 MiB/s  n/a          n/a
File to TCP        76.7 MiB/s   7.8 MiB/s   35 MiB/s     26.1 MiB/s   3.1 MiB/s   40.1 MiB/s   39 MiB/s
Regex Parsing      13.2 MiB/s   n/a         20.5 MiB/s   2.6 MiB/s    4.6 MiB/s   n/a          7.8 MiB/s
TCP to HTTP        26.7 MiB/s   n/a         19.6 MiB/s   <1 MiB/s     2.7 MiB/s   n/a          n/a
TCP to TCP         69.9 MiB/s   5 MiB/s     67.1 MiB/s   3.9 MiB/s    10 MiB/s    70.4 MiB/s   7.6 MiB/s

To learn more about our performance tests, please see the Vector test harness.

Correctness

The following correctness tests are not exhaustive, but they demonstrate fundamental differences in quality and attention to detail:

The suite compares Vector, Filebeat, FluentBit, FluentD, Logstash, Splunk UF, and Splunk HF on:

  • Disk Buffer Persistence
  • File Rotate (create)
  • File Rotate (copytruncate)
  • File Truncation
  • Process (SIGHUP)
  • JSON (wrapped)

To learn more about our correctness tests, please see the Vector test harness.

Features

Vector is an end-to-end, unified, open data platform.

The comparison covers Vector, Beats, Fluentbit, Fluentd, Logstash, Splunk UF, and Splunk HF across:

  • End-to-end: agent and aggregator deployment roles
  • Unified: logs, metrics, and traces (traces 🚧 coming soon)
  • Open: open-source and vendor-neutral
  • Reliability: memory safety, delivery guarantees, and multi-core scaling

Footnote: tools whose metrics support is not interoperable represent metrics as structured logs.


Developed with ❤️ by Timber.io - Security Policy - Privacy Policy

Issues
  • perf: Tokio compat

    This PR follows the proposal for #1142

    It switches our runtime to tokio 0.2 with very few changes; the value in this is that we can run benchmarks and performance tests to ensure there's no degradation from upgrading to the new tokio reactor.

    Addresses #1695 and #1696.

    Current state (updated as we go):

    • [x] fd leak (https://github.com/tokio-rs/tokio-compat/issues/27)
    • [x] test_max_size and test_max_size_resume tests failing (it was happening from before, https://github.com/timberio/vector/pull/1922#issuecomment-597018268)
    • [x] test_source_panic (we concluded it was failing before, https://github.com/timberio/vector/pull/1922#issuecomment-594098317)
    • [x] sinks::statsd::test::test_send_to_statsd failing (see #2016 and #2026)
    • [x] benchmarks (see https://github.com/timberio/vector/pull/1922#issuecomment-594037080 and https://github.com/timberio/vector/pull/1922#issuecomment-594042060)
    • [x] test harness comparison (got raw data loaded and ready at S3, see https://github.com/timberio/vector/pull/1922#issuecomment-594114561 and https://github.com/timberio/vector/pull/1922#issuecomment-594120453; report: https://github.com/timberio/vector/pull/1922#issuecomment-597817765)
    • [x] additional comparison results (https://github.com/timberio/vector/pull/1922#issuecomment-600077012 and https://github.com/timberio/vector/pull/1922#issuecomment-600301648)

    There's a tokio-compat-debug branch that I use to dump a trashed version of the code altered with extensive debugging. I'm only using it to run the code against CI, since my local setup doesn't reproduce the issues, and that branch isn't supposed to be merged. Rather, we'll just take the end results from it, if there are any.

    domain: performance type: enhancement 
    opened by MOZGIII 91
  • feat(new sink): Initial `pulsar` sink implementation

    closes #690

    It's missing tests and a proper healthcheck right now; I just wanted to show this version to get some feedback. It's based largely on the kafka sink: it impls Sink and, like the kafka sink, holds the in-flight futures and acks back after completion.

    Let me know if this approach is suitable, whether there are configuration options missing that we'd like to add, etc. I'd like to hold off on SSL at the moment because I don't think it's well supported in the underlying crate. This change also depends on changes to SinkContext and to pulsar-rs, which are worth looking at.
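
    For context, a rough sketch of the kind of configuration the sink might accept, modeled on the kafka sink; the endpoint and topic option names here are assumptions, not the final interface:

    # hypothetical pulsar sink config sketch
    sinks:
      my_pulsar:
        type: pulsar
        inputs:
          - my_source
        endpoint: pulsar://127.0.0.1:6650   # assumed option name
        topic: vector-logs                  # assumed option name
        encoding:
          codec: json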

    opened by leshow 64
  • [Merged by Bors] - chore: Add a test implementation of bors

    This PR enables https://bors.tech for our PR merge workflow. Now we run an abbreviated lint-centric check on the PR rather than the full test suite. We've achieved this by running the lint tests inside a container running the timberio/ci_image Docker image.

    We also consolidated and updated the image build process in environment.yml to keep the ci_image up-to-date.

    Then, instead of pressing merge, we use the bors commands to submit the PR to a staging branch where CI is run. The new CI run merges the prior e2e and tests jobs as a group. If these return green then the staging branch is fast-forwarded/merged into master.

    Signed-off-by: James Turnbull [email protected]

    domain: ci domain: tests 
    opened by jamtur01 49
  • feat(new source): Initial `kubernetes_logs` implementation

    This PR implements the Kubernetes source according to #2222. A few things differ from the RFC:

    • the name of the source is kubernetes_logs - this wasn't properly addressed in the RFC, but the logs source and other Kubernetes source kinds would very likely have very different internal designs, so we should probably keep them as separate units; to disambiguate from the very beginning, I changed the name to kubernetes_logs;
    • we had to bump the MSKV to 1.15 - watch bookmarks were introduced in that version, and they are what we want in order to avoid extra desync issues.

    Done:

    • closes #2626;
    • closes #2627;
    • closes #2628;
    • closes #2629;
    • closes #2630;
    • closes #2631.

    To do (before the merge):

    • [x] shutdown logic fix;
    • [x] address the fingerprinting issue (#2890);
    • [x] move the annotation process to the create_event to optimize out access to the file field;
    • [x] ensure that MultiResponseDecoder doesn't leak memory;
    • [x] add new dependencies to the environment (minikube, skaffold, etc);
    • [x] bookmark parsing error (https://github.com/Arnavion/k8s-openapi/issues/70).

    To do (after the merge):

    • more configuration and knobs;
    • #2632 (integration tests);
    • #2633 (documentation).

    External refs:

    • https://github.com/aws/containers-roadmap/issues/809
    • https://github.com/aws/containers-roadmap/issues/61

    The overall approach was to build highly composable components so that we can reuse them for further work - like adding a sidecar-oriented transform for pod annotation, or exclusions based on namespace labels.
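
    For reference, a minimal sketch of wiring up the new source; everything beyond the source type is left at assumed defaults:

    # hypothetical minimal usage of the new kubernetes_logs source
    sources:
      k8s:
        type: kubernetes_logs

    sinks:
      out:
        type: console
        inputs:
          - k8s
        encoding:
          codec: json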

    opened by MOZGIII 49
  • feat(new transform): New `merge` transform

    Closes #1488.

    opened by MOZGIII 45
  • QA the new automatic concurrency feature

    https://github.com/timberio/vector/pull/3094 introduces the ability for Vector to automatically determine the optimal in_flight_limit value, a feature outlined in RFC 1858. Given its automatic nature, we need to QA this feature more carefully than usual.

    Setup

    1. First, this should be a black-box, end-to-end, integration style of testing. @bruceg covered unit tests in #3094. We want to test real-world usage as much as possible.
    2. While we could set up a real service, like Elasticsearch, I think we're better off creating a simple simulator that gives us more control over scenarios (see the config sketch after the questions below). This could build upon our http_test_server project.

    Questions

    We want to answer:

    1. How quickly does Vector find the optimal value given the http sink defaults?
    2. Does Vector correctly back off in the event of total service failure?
    3. Does Vector correctly back off in the event of gradually increasing response times?
    4. Does Vector correctly back off in the event of sudden sustained increasing response times?
    5. Does Vector correctly ignore one-off errors, such as transient network errors, that are not indicative of service degradation? We want to make sure Vector does not overreact in these situations.
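
    A sketch of the kind of black-box configuration this QA could use, pointing the http sink at the simulator; the generator options and the adaptive in_flight_limit keyword are assumptions based on #3094, not settled syntax:

    # hypothetical QA setup: steady generator -> http sink -> local simulator
    sources:
      gen:
        type: generator              # any steady event source would do
        lines:
          - "test event"             # assumed generator options
        interval: 0.001

    sinks:
      to_simulator:
        type: http
        inputs:
          - gen
        uri: http://127.0.0.1:8080/  # the http_test_server-style simulator
        encoding:
          codec: json
        request:
          in_flight_limit: adaptive  # assumed keyword for the automatic mode
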
    domain: networking type: task 
    opened by binarylogic 44
  • feat(windows platform): add a way to register and run vector as a service

    On Windows, Vector should be able to run as a service.

    This must be done in two distinct steps:

    First, the user must register vector as a service in the Service Control Manager. A new --service command-line parameter has been introduced (Windows only), with multiple possible values:

    • --service install: install vector as a service
    • --service uninstall: uninstall the service
    • --service start: start the service
    • --service stop: stop the service

    Then, vector must provide a service entry point and register itself with the Service Control Manager. A dependency on the windows-service crate has been introduced to handle service installation and registration.

    The main function has been split into two distinct mains: the old main under #[cfg(not(windows))] and a new main under #[cfg(windows)].

    The new main for Windows first attempts to start as a service, and then falls back to console mode. The launch code has been extracted to a new service.rs file so that it can be called from the part that handles the startup of the service on Windows.

    I think this PR still needs some work. I'm pretty new to Rust (coming from a C++ background), so my coding style might not fit your standards. I also need some suggestions on how to properly handle and wait for the sources to shut down before stopping the service on Windows (vector_windows.rs:60).

    This feature was inspired by telegraf.

    platform: windows 
    opened by oktal 41
  • enhancement: Use new Lookup accessors on LogEvents and entire internal API.

    Closes https://github.com/timberio/vector/issues/2843
    Closes https://github.com/timberio/vector/issues/2845
    Closes https://github.com/timberio/vector/issues/4005

    This is a ... chonky PR.

    Most of it is trivial (and largely mechanized) changes to some basic interfaces around the Event types, as well as a bunch of new code. Much of the old code remains in a deprecated state for remap-lang to fully adopt.

    Notable developer facing changes

    • All Event, Value, and the new Lookup/LookupBuf types are now fully documented and tested, including several corner cases found during development.
    • From<String> and From<&str> are no longer implemented on Event.
      • The Event type was moved into shared and can no longer access the global log_schema. Yes, it was a lot of work.
    • LookupBuf, a new String analog which pre-parses lookup paths and allows rich remap-syntax for APIs, has been added.
    • Lookup, a new &str analog which pre-parses lookup paths and allows rich remap-syntax for APIs, has been added. It can be trivially obtained from a LookupBuf, just as a &str can be from a String.
    • The new LookupBuf type is deserializable, as well as serializable (like Lookup). This means Config structs can freely contain them and use them, without a conversion step.
    • The log_event! {} macro is much more prominent, use it, please!
    • Event and Value have (nearly) matching APIs around insert/remove etc. On Value these return a result, as the action is not valid on 'primitives'.
      • This was required to implement the recursive API the new Event API uses cleanly, and gives us a lot more options around working with things.
    • log_event.insert(String::from("foo.bar"), 1) will now insert at "foo.bar" the value 1, where prior it would insert "bar", set to 1, into "foo".
    • log_event.insert(LookupBuf::from_str("foo.bar")?, 1) will now insert "bar", set to 1, into "foo", as strings did prior.
    • The Event and Value types have been moved into shared to facilitate sharing between all our crates, making the types more globally accessible. They are re-exported in the old places, so you shouldn't notice.
    • A Look and LookSegment trait are present to enforce parity between types, but it's preferred that the types implement functions directly, to ensure users always have function access (even without the traits imported).

    Reviewer Notes

    @JeanMertz and I agreed to make a conversion layer for this initial PR between Remap Paths and Lookups, this will be removed later.

    I suggest treating this PR in three separate chunks:

    • Review the changes to shared. It will introduce to you the new Event type and the related API.
    • Help comb over the changes inside vector to ensure we are not reading any raw data as Lookup types.
    • Suggest some new test cases for lookups or the API in general.

    TODOs

    • [ ] I want to do some more benching/profiling. I am unsure about the current Value::deserialize impl.
    • [x] Typify the errors.
    • [ ] Resolve last remap<->lookup inconsistencies
    domain: data model 
    opened by Hoverbear 39
  • New `tap` subcommand v1

    As part of https://github.com/timberio/vector-product/issues/24, we want to introduce a way for users to tap into a running Vector instance and sample events.

    Goals

    1. Ability to sample events at any step within the configured pipeline.
    2. Do so in a way that will unblock the web UI. The web UI will have sampling ability.

    Proposal

    I propose that we introduce a new vector tap subcommand:

    $ vector help tap
    
    USAGE:
        vector tap [OPTIONS] [ID]
    
    OPTIONS
        --limit            Exit when this limit is reached (no default)
        --position         Whether to tap at the front or back of the component (default front)           
        --sample-rate      How often events should be sampled (default 1s)
    
    ARGS
        ID                 The component ID to tap.
    

    And usage is simple:

    $ vector tap my-files-source
    0" 406 26004
    86.143.250.111 - - [26/Jul/2020:13:09:43 -0400] "POST /engineer HTTP/1.1" 502 5575
    164.149.249.195 - - [26/Jul/2020:13:09:43 -0400] "POST /extend/incubate/roi HTTP/2.0" 100 11738
    57.57.164.242 - tillman5416 [26/Jul/2020:13:09:43 -0400] "HEAD /architectures/e-tailers/clicks-and-mortar HTTP/1.1" 416 14004
    234.49.218.76 - - [26/Jul/2020:13:09:43 -0400] "GET /relationships HTTP/1.0" 504 19698
    61.188.31.127 - - [26/Jul/2020:13:09:43 -0400] "HEAD /harness/cultivate/seamless/b2c HTTP/2.0" 205 18781
    22.61.249.23 - feest7241 [26/Jul/2020:13:09:43 -0400] "POST /partnerships/grow/impactful/deploy HTTP/2.0" 302 13686
    139.114.84.146 - schulist2106 [26/Jul/2020:13:09:43 -0400] "PUT /target/web-enabled/b2b HTTP/1.1" 403 26476
    

    And tapping the output:

    $ vector tap --position=back my-files-source
    {"timestamp": "...", "message": "86.143.250.111 - - [26/Jul/2020:13:09:43 -0400] "POST /engineer HTTP/1.1" 502 5575", "file": "..."}
    
    domain: cli domain: observability type: feature 
    opened by binarylogic 37
  • Support Freebsd

    It would be great to be able to run vector on FreeBSD (or any other *BSD). So I tried to follow the manual build steps and encountered a few errors. The first one was solved by using GNU make (gmake) instead of BSD make. I am not familiar with Rust, but I hope this can help bring vector to FreeBSD (it shouldn't be too far from macOS support, I hope).

    Platform: FreeBSD 11.2-RELEASE-p10
    cargo: 1.35.0
    vector source version: 0.4.0-dev
    Env options: RUST_BACKTRACE=1

    error:

     Compiling jemalloc-sys v0.3.0
    error: failed to run custom build command for `leveldb-sys v2.0.1`
    process didn't exit successfully: `/tmp/vector/vector-master/target/debug/build/leveldb-sys-cd5040109e6341dc/build-script-build` (exit code: 101)
    --- stdout
    [build] Started
    [snappy] Cleaning
    test -z "libsnappy.la" || rm -f libsnappy.la
    rm -f "./so_locations"
    rm -rf .libs _libs
     rm -f snappy_unittest
    rm -f *.o
    rm -f *.lo
    rm -f *.tab.c
    test -z "snappy-stubs-public.h" || rm -f snappy-stubs-public.h
    test . = "." || test -z "" || rm -f
    rm -f config.h stamp-h1
    rm -f libtool config.lt
    rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
    rm -f config.status config.cache config.log  configure.lineno config.status.lineno
    rm -rf ./.deps
    rm -f Makefile
    [snappy] Configuring
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... ./install-sh -c -d
    checking for gawk... no
    checking for mawk... no
    checking for nawk... nawk
    checking whether make sets $(MAKE)... yes
    checking build system type... x86_64-unknown-freebsd11.2
    checking host system type... x86_64-unknown-freebsd11.2
    checking how to print strings... printf
    checking for style of include used by make... GNU
    checking for gcc... no
    checking for cc... cc
    checking whether the C compiler works... yes
    checking for C compiler default output file name... a.out
    checking for suffix of executables...
    checking whether we are cross compiling... no
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether cc accepts -g... yes
    checking for cc option to accept ISO C89... none needed
    checking dependency style of cc... gcc3
    checking for a sed that does not truncate output... /usr/bin/sed
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for fgrep... /usr/bin/grep -F
    checking for ld used by cc... /usr/bin/ld
    checking if the linker (/usr/bin/ld) is GNU ld... yes
    checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm
    checking the name lister (/usr/bin/nm) interface... BSD nm
    checking whether ln -s works... yes
    checking the maximum length of command line arguments... 196608
    checking whether the shell understands some XSI constructs... yes
    checking whether the shell understands "+="... no
    checking how to convert x86_64-unknown-freebsd11.2 file names to x86_64-unknown-freebsd11.2 format... func_convert_file_noop
    checking how to convert x86_64-unknown-freebsd11.2 file names to toolchain format... func_convert_file_noop
    checking for /usr/bin/ld option to reload object files... -r
    checking for objdump... objdump
    checking how to recognize dependent libraries... pass_all
    checking for dlltool... no
    checking how to associate runtime and link libraries... printf %s\n
    checking for ar... ar
    checking for archiver @FILE support... no
    checking for strip... strip
    checking for ranlib... ranlib
    checking command to parse /usr/bin/nm output from cc object... ok
    checking for sysroot... no
    checking for mt... mt
    checking if mt is a manifest tool... no
    checking how to run the C preprocessor... cc -E
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking for dlfcn.h... yes
    checking for objdir... .libs
    checking if cc supports -fno-rtti -fno-exceptions... yes
    checking for cc option to produce PIC... -fPIC -DPIC
    checking if cc PIC flag -fPIC -DPIC works... yes
    checking if cc static flag -static works... yes
    checking if cc supports -c -o file.o... yes
    checking if cc supports -c -o file.o... (cached) yes
    checking whether the cc linker (/usr/bin/ld) supports shared libraries... yes
    checking whether -lc should be explicitly linked in... no
    checking dynamic linker characteristics... freebsd11.2 ld.so
    checking how to hardcode library paths into programs... immediate
    checking whether stripping libraries is possible... no
    checking if libtool supports shared libraries... yes
    checking whether to build shared libraries... yes
    checking whether to build static libraries... yes
    checking for g++... no
    checking for c++... c++
    checking whether we are using the GNU C++ compiler... yes
    checking whether c++ accepts -g... yes
    checking dependency style of c++... gcc3
    checking how to run the C++ preprocessor... c++ -E
    checking for ld used by c++... /usr/bin/ld
    checking if the linker (/usr/bin/ld) is GNU ld... yes
    checking whether the c++ linker (/usr/bin/ld) supports shared libraries... yes
    checking for c++ option to produce PIC... -fPIC -DPIC
    checking if c++ PIC flag -fPIC -DPIC works... yes
    checking if c++ static flag -static works... yes
    checking if c++ supports -c -o file.o... yes
    checking if c++ supports -c -o file.o... (cached) yes
    checking whether the c++ linker (/usr/bin/ld) supports shared libraries... yes
    checking dynamic linker characteristics... freebsd11.2 ld.so
    checking how to hardcode library paths into programs... immediate
    checking whether byte ordering is bigendian... no
    checking for size_t... yes
    checking for ssize_t... yes
    checking for stdint.h... (cached) yes
    checking stddef.h usability... yes
    checking stddef.h presence... yes
    checking for stddef.h... yes
    checking sys/mman.h usability... yes
    checking sys/mman.h presence... yes
    checking for sys/mman.h... yes
    checking sys/resource.h usability... yes
    checking sys/resource.h presence... yes
    checking for sys/resource.h... yes
    checking windows.h usability... no
    checking windows.h presence... no
    checking for windows.h... no
    checking byteswap.h usability... no
    checking byteswap.h presence... no
    checking for byteswap.h... no
    checking sys/byteswap.h usability... no
    checking sys/byteswap.h presence... no
    checking for sys/byteswap.h... no
    checking sys/endian.h usability... yes
    checking sys/endian.h presence... yes
    checking for sys/endian.h... yes
    checking sys/time.h usability... yes
    checking sys/time.h presence... yes
    checking for sys/time.h... yes
    checking for mmap... yes
    checking for 'gtest-config'... checking for gtest-config... no
    no
    checking for pkg-config... no
    checking for gflags... no
    checking if the compiler supports __builtin_expect... yes
    checking if the compiler supports __builtin_ctzll... yes
    checking for zlibVersion in -lz... yes
    checking for lzo1x_1_15_compress in -llzo2... no
    checking for lzf_compress in -llzf... no
    checking for fastlz_compress in -lfastlz... no
    checking for qlz_compress in -lquicklz... no
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating snappy-stubs-public.h
    config.status: creating config.h
    config.status: executing depfiles commands
    config.status: executing libtool commands
    [snappy] Building
    make  all-am
    /bin/sh ./libtool --tag=CXX    --mode=compile c++ -DHAVE_CONFIG_H -I.      -fPIC -MT snappy.lo -MD -MP -MF .deps/snappy.Tpo -c -o snappy.lo snappy.cc
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy.lo -MD -MP -MF .deps/snappy.Tpo -c snappy.cc  -fPIC -DPIC -o .libs/snappy.o
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy.lo -MD -MP -MF .deps/snappy.Tpo -c snappy.cc -o snappy.o >/dev/null 2>&1
    mv -f .deps/snappy.Tpo .deps/snappy.Plo
    /bin/sh ./libtool --tag=CXX    --mode=compile c++ -DHAVE_CONFIG_H -I.      -fPIC -MT snappy-sinksource.lo -MD -MP -MF .deps/snappy-sinksource.Tpo -c -o snappy-sinksource.lo snappy-sinksource.cc
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy-sinksource.lo -MD -MP -MF .deps/snappy-sinksource.Tpo -c snappy-sinksource.cc  -fPIC -DPIC -o .libs/snappy-sinksource.o
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy-sinksource.lo -MD -MP -MF .deps/snappy-sinksource.Tpo -c snappy-sinksource.cc -o snappy-sinksource.o >/dev/null 2>&1
    mv -f .deps/snappy-sinksource.Tpo .deps/snappy-sinksource.Plo
    /bin/sh ./libtool --tag=CXX    --mode=compile c++ -DHAVE_CONFIG_H -I.      -fPIC -MT snappy-stubs-internal.lo -MD -MP -MF .deps/snappy-stubs-internal.Tpo -c -o snappy-stubs-internal.lo snappy-stubs-internal.cc
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy-stubs-internal.lo -MD -MP -MF .deps/snappy-stubs-internal.Tpo -c snappy-stubs-internal.cc  -fPIC -DPIC -o .libs/snappy-stubs-internal.o
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy-stubs-internal.lo -MD -MP -MF .deps/snappy-stubs-internal.Tpo -c snappy-stubs-internal.cc -o snappy-stubs-internal.o >/dev/null 2>&1
    mv -f .deps/snappy-stubs-internal.Tpo .deps/snappy-stubs-internal.Plo
    /bin/sh ./libtool --tag=CXX    --mode=compile c++ -DHAVE_CONFIG_H -I.      -fPIC -MT snappy-c.lo -MD -MP -MF .deps/snappy-c.Tpo -c -o snappy-c.lo snappy-c.cc
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy-c.lo -MD -MP -MF .deps/snappy-c.Tpo -c snappy-c.cc  -fPIC -DPIC -o .libs/snappy-c.o
    libtool: compile:  c++ -DHAVE_CONFIG_H -I. -fPIC -MT snappy-c.lo -MD -MP -MF .deps/snappy-c.Tpo -c snappy-c.cc -o snappy-c.o >/dev/null 2>&1
    mv -f .deps/snappy-c.Tpo .deps/snappy-c.Plo
    /bin/sh ./libtool --tag=CXX    --mode=link c++   -fPIC -version-info 3:1:2  -o libsnappy.la -rpath /usr/local/lib snappy.lo snappy-sinksource.lo  snappy-stubs-internal.lo snappy-c.lo
    libtool: link: c++  -fPIC -DPIC -shared -nostdlib /usr/lib/crti.o /usr/lib/crtbeginS.o  .libs/snappy.o .libs/snappy-sinksource.o .libs/snappy-stubs-internal.o .libs/snappy-c.o   -L/usr/lib -lc++ -lm -lc -lgcc -lgcc_s /usr/lib/crtendS.o /usr/lib/crtn.o    -Wl,-soname -Wl,libsnappy.so.3 -o .libs/libsnappy.so.3
    libtool: link: (cd ".libs" && rm -f "libsnappy.so" && ln -s "libsnappy.so.3" "libsnappy.so")
    libtool: link: (cd ".libs" && rm -f "libsnappy.so" && ln -s "libsnappy.so.3" "libsnappy.so")
    libtool: link: ar cru .libs/libsnappy.a  snappy.o snappy-sinksource.o snappy-stubs-internal.o snappy-c.o
    libtool: link: ranlib .libs/libsnappy.a
    libtool: link: ( cd ".libs" && rm -f "libsnappy.la" && ln -s "../libsnappy.la" "libsnappy.la" )
    c++ -DHAVE_CONFIG_H -I.      -fPIC -MT snappy_unittest-snappy_unittest.o -MD -MP -MF .deps/snappy_unittest-snappy_unittest.Tpo -c -o snappy_unittest-snappy_unittest.o `test -f 'snappy_unittest.cc' || echo './'`snappy_unittest.cc
    mv -f .deps/snappy_unittest-snappy_unittest.Tpo .deps/snappy_unittest-snappy_unittest.Po
    c++ -DHAVE_CONFIG_H -I.      -fPIC -MT snappy_unittest-snappy-test.o -MD -MP -MF .deps/snappy_unittest-snappy-test.Tpo -c -o snappy_unittest-snappy-test.o `test -f 'snappy-test.cc' || echo './'`snappy-test.cc
    mv -f .deps/snappy_unittest-snappy-test.Tpo .deps/snappy_unittest-snappy-test.Po
    /bin/sh ./libtool --tag=CXX    --mode=link c++   -fPIC   -o snappy_unittest snappy_unittest-snappy_unittest.o  snappy_unittest-snappy-test.o libsnappy.la -lz
    libtool: link: c++ -fPIC -o .libs/snappy_unittest snappy_unittest-snappy_unittest.o snappy_unittest-snappy-test.o  ./.libs/libsnappy.so -lz -Wl,-rpath -Wl,/usr/local/lib
    [build] Copying output files
    [build] Copying the `build_detect_platform` template
    [leveldb] Cleaning
    
    make[1]: stopped in /root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18
    [leveldb] Building command
    [leveldb] Building
    
    make[1]: stopped in /root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18
    [leveldb] Build finished
    [build] Copying output files
    
    --- stderr
    In file included from snappy_unittest.cc:39:
    ./snappy-test.h:135:20: warning: control reaches end of non-void function [-Wreturn-type]
      int Defaults() { }
                       ^
    ./snappy-test.h:161:3: warning: control may reach end of non-void function [-Wreturn-type]
      }
      ^
    ./snappy-test.h:179:3: warning: control may reach end of non-void function [-Wreturn-type]
      }
      ^
    snappy_unittest.cc:165:1: warning: control may reach end of non-void function [-Wreturn-type]
    }
    ^
    snappy_unittest.cc:959:24: warning: implicit conversion from 'int' to 'std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::value_type' (aka 'char') changes value from 128 to -128 [-Wconstant-conversion]
      compressed.push_back(128);
                 ~~~~~~~~~ ^~~
    snappy_unittest.cc:960:24: warning: implicit conversion from 'int' to 'std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::value_type' (aka 'char') changes value from 128 to -128 [-Wconstant-conversion]
      compressed.push_back(128);
                 ~~~~~~~~~ ^~~
    snappy_unittest.cc:961:24: warning: implicit conversion from 'int' to 'std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::value_type' (aka 'char') changes value from 128 to -128 [-Wconstant-conversion]
      compressed.push_back(128);
                 ~~~~~~~~~ ^~~
    snappy_unittest.cc:962:24: warning: implicit conversion from 'int' to 'std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::value_type' (aka 'char') changes value from 128 to -128 [-Wconstant-conversion]
      compressed.push_back(128);
                 ~~~~~~~~~ ^~~
    snappy_unittest.cc:963:24: warning: implicit conversion from 'int' to 'std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::value_type' (aka 'char') changes value from 128 to -128 [-Wconstant-conversion]
      compressed.push_back(128);
                 ~~~~~~~~~ ^~~
    9 warnings generated.
    In file included from snappy-test.cc:31:
    ./snappy-test.h:135:20: warning: control reaches end of non-void function [-Wreturn-type]
      int Defaults() { }
                       ^
    ./snappy-test.h:161:3: warning: control may reach end of non-void function [-Wreturn-type]
      }
      ^
    ./snappy-test.h:179:3: warning: control may reach end of non-void function [-Wreturn-type]
      }
      ^
    3 warnings generated.
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 19: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 21: Could not find build_config.mk
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 36: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 38: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 74: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 76: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 81: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 93: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 98: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 198: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 221: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 227: Need an operator
    make[1]: Fatal errors encountered -- cannot continue
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 19: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 21: Could not find build_config.mk
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 36: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 38: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 74: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 76: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 81: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 93: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 98: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 198: Missing dependency operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 221: Need an operator
    make[1]: "/root/.cargo/registry/src/github.com-1ecc6299db9ec823/leveldb-sys-2.0.1/deps/leveldb-1.18/Makefile" line 227: Need an operator
    make[1]: Fatal errors encountered -- cannot continue
    thread 'main' panicked at 'copy of output files failed', src/libcore/option.rs:1034:5
    stack backtrace:
       0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
       1: std::sys_common::backtrace::print
       2: std::panicking::default_hook::{{closure}}
       3: std::panicking::default_hook
       4: std::panicking::rust_panic_with_hook
       5: std::panicking::continue_panic_fmt
       6: rust_begin_unwind
       7: core::panicking::panic_fmt
       8: core::option::expect_failed
       9: core::option::Option<T>::expect
                 at /wrkdirs/usr/ports/lang/rust/work/rustc-1.35.0-src/src/libcore/option.rs:312
      10: build_script_build::build_leveldb
                 at ./src/build.rs:115
      11: build_script_build::main
                 at ./src/build.rs:178
      12: std::rt::lang_start::{{closure}}
                 at /wrkdirs/usr/ports/lang/rust/work/rustc-1.35.0-src/src/libstd/rt.rs:64
      13: std::panicking::try::do_call
      14: __rust_maybe_catch_panic
                 at src/libpanic_unwind/lib.rs:87
      15: std::rt::lang_start_internal
      16: std::rt::lang_start
                 at /wrkdirs/usr/ports/lang/rust/work/rustc-1.35.0-src/src/libstd/rt.rs:64
      17: main
      18: _start
      19: <unknown>
    
    warning: build failed, waiting for other jobs to finish...
    error: build failed
    gmake: *** [Makefile:22: build] Error 101
    
    type: task 
    opened by sjolicoeur 35
  • chore(deps): bump anyhow from 1.0.40 to 1.0.41

    Bumps anyhow from 1.0.40 to 1.0.41.

    Release notes

    Sourced from anyhow's releases.

    1.0.41

    Commits
    • b4f670d Release 1.0.41
    • 8bf68c8 Merge pull request 155 from jfirebaugh/patch-1
    • 254c3b6 Depend on a recent-enough version of backtrace
    • 29e4e0e Update ui test suite to nightly-2021-05-14
    • aa6c83d Resolve branches_sharing_code clippy lint
    • See full diff in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    domain: deps 
    opened by dependabot[bot] 0
  • Add `assert` and `assert_eq` functions to VRL for use in unit tests

    These functions will make it more straightforward to write unit tests, but could also be used in remap transforms.

    Ideally the output of these would clearly highlight which assertion failed.
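
    For illustration, a sketch of how the proposed functions might be used in a remap transform; the exact signatures are not settled in this issue:

    transforms:
      checks:
        type: remap
        inputs:
          - in
        source: |-
          # hypothetical usage of the proposed functions
          assert(exists(.message))
          assert_eq(.status, 200)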

    domain: remap domain: unit tests type: enhancement 
    opened by jszwedko 0
  • Rewrite documentation using Remap conditions as examples

    The current docs still use the check_fields syntax.

    https://vector.dev/docs/reference/configuration/tests/ https://vector.dev/guides/level-up/unit-testing/

    These should be updated to use the VRL-style checks.
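
    For instance, a test condition written in the old check_fields style could be shown alongside its VRL-style equivalent; a sketch (the field and value are illustrative):

    tests:
      - name: example
        inputs:
          - insert_at: my-transform
            type: log
            log_fields:
              message: hello
        outputs:
          - extract_from: my-transform
            conditions:
              # old check_fields style:
              #   - type: check_fields
              #     message.eq: hello
              # VRL-style check:
              - type: vrl
                source: .message == "hello"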

    domain: external docs domain: remap domain: unit tests type: task 
    opened by jszwedko 0
  • enhancement(tests): Create affinity pod in separate namespace for k8s tests

    As noted in #7798, there is the potential for flakiness in tests.

    Some tests create a pod purely so that other tests can declare affinity to it, ensuring they are all deployed on the same node. However, these tests also scan the logs of any pods in a given namespace to test various things. If the affinity pod is created in that namespace, there is the potential for any logs it creates to interfere with the tests.

    This PR creates these affinity pods in a separate namespace so they don't risk getting in the way.

    Signed-off-by: Stephen Wakely [email protected]

    ci-condition: k8s e2e all targets ci-condition: k8s e2e tests enable 
    opened by StephenWakely 0
  • Failed parsing of default apache error log by `parse_apache_log`

    A user reported that their default apache error logs were not being correctly parsed due to the lack of a thread ID in the log message.

    Example:

    $ parse_apache_log!(s'[2021-06-04 15:40:27.138633] [php7:emerg] [pid 4803] [client 95.223.77.60:35106] PHP Parse error:  syntax error, unexpected \'->\' (T_OBJECT_OPERATOR) in /var/www/prod/releases/master-c7225365fd9faa26262cffeeb57b31bd7448c94a/source/index.php on line 14', timestamp_format: "%Y-%m-%d %H:%M:%S.%f", format: "error")
    function call error for "parse_apache_log" at (0:333): failed parsing common log line
    
    domain: remap type: bug 
    opened by jszwedko 0
  • log_to_metrics docs should give examples of all metric types

    Looking at the docs for the log_to_metric transform, it's not clear how to specify the buckets in the output for a histogram metric. I'd love to see the docs expanded to show how to specify these; it would probably be best to provide an example for every metric type with all fields specified.
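
    Until then, a sketch of what such an example might look like; the metrics entry follows the transform's documented shape (type, field, name, tags), but treat the specifics as illustrative:

    transforms:
      to_metric:
        type: log_to_metric
        inputs:
          - parsed_logs
        metrics:
          - type: histogram
            field: duration                  # numeric field on the log event
            name: request_duration_seconds   # hypothetical metric name
            tags:
              host: "{{ host }}"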

    domain: external docs transform: log_to_metric type: enhancement 
    opened by jkodroff 0
  • [UnitTest] Tests with `no_outputs_from` and a non-existent transform should not pass

    Vector Version

    vector 0.14.0 (x86_64-unknown-linux-gnu 5f3a319 2021-06-03)
    

    Bug Description

    Consider the following (incomplete to run but complete to test) Vector configuration:

    transforms:
      my-transform:
        type: remap
        inputs:
          - no-source
        source: |-
          .message = "hello"
    
    tests:
      - name: expected-output
        inputs:
          - insert_at: my-transform
            type: log
            log_fields: {}
        outputs:
          - extract_from: my-transform
            conditions:
              - .message == "hello"
      - name: no-output
        inputs:
          - insert_at: my-transform
            type: log
            log_fields: {}
        no_outputs_from:
          - non-existent
    

    Note the following points about the above configuration:

    • There are no sources or sinks defined. This configuration is not valid to run, but it can still be used to run unit tests.
    • The second unit test, no-output, declares that it expects no output from a transform named non-existent, which is not defined anywhere.

    Now consider the following workflows:

    $ vector validate test.yaml
    Failed to load ["test.yaml"]
    ----------------------------
    x No sources defined in the config.
    x No sinks defined in the config.
    x Input "no-source" for transform "my-transform" doesn't exist.
    

    What we expected to see in this workflow:

    • There should be at least a warning that a unit test is trying to interact with a transform that does not exist.
    $ vector test test.yaml
    Running tests
    test expected-output ... passed
    test no-output ... passed
    

    What we expected to see in this workflow:

    • The second test, no-output, should not have passed, as no transform named non-existent is defined.

    While the first expectation is rather "soft" (informative), we think the second one is quite important, because the passing test misleads us into thinking the no-output test is correct when in fact it is not testing what we wanted.

    We got into this issue recently when we refactored one of our Vector configurations and forgot to rename the transform down in the unit tests section. Because the test with no_outputs_from is still passing, we never noticed it was using the wrong transform name as input.

    It can be argued that passing a test whose no_outputs_from references a non-existent transform is correct (a non-existent transform does, in fact, produce no output), but we believe it can lead to unnoticed mistakes in tests too :(

    type: bug 
    opened by hhromic 2
  • Sources, transforms, sinks links in your footer resulted in a 404 error

    domain: external docs type: bug 
    opened by natea 1
  • Build dynamic sources

    Use-cases

    Dynamically generate sources for new resources:

    • K8s pods/containers that need more than just stdout logs
    • Services published in a directory service like consul or etcd

    Proposal

    Vector could:

    • read an annotation or label for k8s (this could also serve as an on/off flag for generic pod log collection)
    • read labels for docker containers
    • read metadata from selected service discovery providers
    • etc.

    And create the sources accordingly (see the sketch below).
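
    A sketch of the kind of label-driven collection this could enable; the label name is hypothetical, and extra_label_selector is an existing kubernetes_logs option used here purely for illustration:

    # hypothetical: a discovery layer instantiates sources from metadata
    sources:
      opted_in_pods:
        type: kubernetes_logs
        extra_label_selector: "vector.dev/collect=true"   # hypothetical opt-in label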

    type: feature 
    opened by prognant 0
  • enhancement(file source): Add support for acknowledgements

    Closes #7459

    It looks like a fair chunk of the asynchronous acknowledgement framework can be reused between this and the Kafka source work #7787, but that isn't reviewed yet. Once this is merged, supporting acknowledgements in the Kubernetes source should be pretty short work. This also does no batching of events, so performance is likely to plummet once acknowledgements are turned on. The file server already reads lines in batches, and we now expose a nice structure that could make batching possible without major reworks to the sources, but that was beyond the scope of this PR.
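
    For context, a sketch of how acknowledgements might be exercised end to end once this lands; the sink-level acknowledgements flag is an assumption about the eventual knob:

    sources:
      logs:
        type: file
        include:
          - /var/log/app/*.log

    sinks:
      out:
        type: http
        inputs:
          - logs
        uri: http://127.0.0.1:8080/
        encoding:
          codec: json
        acknowledgements:
          enabled: true   # assumed flag that drives acks back to the file source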

    source: file type: enhancement 
    opened by bruceg 2
Releases (v0.14.0)

Owner: Timber 🌲 Log better. Solve problems faster.