
Overview

cernan - telemetry aggregation and shipping, last up the ladder


Eugene Cernan, Apollo 17 EVA

Cernan is a telemetry and logging aggregation server. It exposes multiple interfaces for ingestion and can emit to multiple aggregation sources while doing in-flight manipulation of data. Cernan has minimal CPU and memory requirements and is intended to service bursty telemetry without load shedding. Cernan aims to be reliable and convenient to use, both for application engineers and operations staff.

Why you might choose to use cernan:

  • You need to ingest telemetry from multiple protocols.
  • You need to multiplex telemetry over aggregation services.
  • You want to convert log lines into telemetry.
  • You want to convert telemetry into log lines.
  • You want to transform telemetry or log lines in-flight.

If you'd like to learn more, please do have a look in our wiki.

Quickstart

To build cernan you will need to have Rust installed. This should be as simple as:

> curl -sSf https://static.rust-lang.org/rustup.sh | sh

Once Rust is installed, from the root of this project run:

> cargo run -- --config examples/configs/quickstart.toml

and you're good to go. Cernan will report to stdout what ports it is now listening on. If you would like to debug your service, to determine whether the telemetry you intend is actually being issued, run cernan like:

> cargo run -- -vvvv --config examples/configs/quickstart.toml

and full trace output will be reported to stdout.
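Once cernan is up you can exercise it by hand. A minimal sketch, assuming the quickstart config enables a statsd source on the conventional UDP port 8125 (as the configs later in this page do):

```python
import socket

def send_statsd(metric: str, host: str = "127.0.0.1", port: int = 8125) -> None:
    """Fire-and-forget a raw statsd datagram at cernan's statsd source."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(metric.encode("utf-8"), (host, port))
    finally:
        sock.close()

# One counter increment; statsd lines have the shape "name:value|type".
send_statsd("example.counter:1|c")
```

With `-vvvv` enabled you should see the ingested point in the trace output.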

Usage

The cernan server has a few command-line toggles to control its behaviour:

-C, --config <config>    The config file to feed in.
-v                       Turn on verbose output.

The verbose flag -v allows multiples, each addition cranking up the verbosity by one. So:

  • -v -- error, warning
  • -vv -- error, warning, info
  • -vvv -- error, warning, info, debug
  • -vvvv -- error, warning, info, debug, trace

License

cernan is copyright © 2017-2018 Postmates, Inc and released to the public under the terms of the MIT license.

Comments
  • Issues with the native source

    I tried to make a script that would allow reporting arbitrary metrics using a native source: https://gist.github.com/doubleyou/1e75fff02ff26b9a9f2fd5ca6c6d3417

    It's a combination of the compiled .proto file and an actual script that implements CLI and network.

    Payload length is calculated in accordance with https://docs.python.org/2/library/struct.html and https://github.com/postmates/cernan/blob/master/resources/protobufs/native.proto. The rest is pretty much packing a bunch of values into a protobuf packet.
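The length-prefix framing described here can be sketched in Python. Note the 4-byte big-endian length word is an assumption drawn from the gist's use of struct, not from cernan's documentation:

```python
import struct

def frame_payload(serialized: bytes) -> bytes:
    """Length-prefix a serialized protobuf Payload for the native source.

    Assumption: a 4-byte big-endian length word precedes the protobuf
    bytes, mirroring the struct.pack framing used in the linked gist.
    """
    return struct.pack(">I", len(serialized)) + serialized

framed = frame_payload(b"hello")  # stand-in bytes, not a real Payload
```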

    Two issues arise when I try to test it:

    1. Upon start, cernan reports the following in the logs:
    cernan_1             | thread '<unnamed>' panicked at 'Unable to bind to TCP socket: Error { repr: Os { code: 98, message: "Address already in use" } }', /checkout/src/libcore/result.rs:906:4
    cernan_1             | note: Run with `RUST_BACKTRACE=1` for a backtrace.
    cernan_1             | thread '<unnamed>' panicked at 'Unable to bind to TCP socket: Error { repr: Os { code: 98, message: "Address already in use" } }', /checkout/src/libcore/result.rs:906:4
    

    It's worth noting that changing the port doesn't affect that. Seems to be a race condition or something?

    2. However, it seems like later on cernan actually starts listening on the port, because when I run the script (my specific example: ./houston.py service.restarts 1.0 1 service=test 1.0), I see the following error in the logs:
    cernan_1             | thread '<unnamed>' panicked at 'index out of bounds: the len is 0 but the index is 0', /checkout/src/liballoc/vec.rs:1561:14
    

    Adding RUST_BACKTRACE=full didn't really help, I'll probably try to rebuild cernan with debug info, at least to see where the actual calls are happening.

    opened by doubleyou 24
  • Statsd timings seem to break cernan

    Steps to reproduce:

    1. Build cernan (https://github.com/postmates/cernan/commit/a4da52dc93eef606a3667dc6ad664e5d7248937f would work, but I have reproduced this on older versions too).

    2. Start it with the following config:

    data-directory = "/tmp/cernan"
    
    flush-interval = 2
    
    [sources]
    
      [sources.statsd.primary]
      enabled = true
      port = 8125
      forwards = ["sinks.console"]
    
        [sources.statsd.primary.mapping.default]
        mask = ".*"
        bounds = [0.0, 0.1, 10.0, 100.0]
    
    [sinks]
      [sinks.console]
      bin_width = 1
    
    3. pip install git+https://github.com/postmates/pystatsd (virtualenv recommended).

    4. In a Python REPL, run the following:

    import pystatsd
    pystatsd.timing('test.timing', 50)
    
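For comparison, the same timing can be sent without the pystatsd dependency by hand-writing the statsd line (a sketch; assumes the standard `name:value|ms` timer syntax and the port from the config above):

```python
import socket

# Hand-rolled equivalent of pystatsd.timing('test.timing', 50):
# statsd timers use the "name:value|ms" line format.
line = "test.timing:50|ms"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(line.encode("utf-8"), ("127.0.0.1", 8125))
sock.close()
```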

    What happens

    Output with RUST_BACKTRACE=full:

    cernan_1             | thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: CannotSetError', /checkout/src/libcore/result.rs:906:4
    cernan_1             | stack backtrace:
    cernan_1             |    0:     0x563ec43a0063 - std::sys::imp::backtrace::tracing::imp::unwind_backtrace::h8ed7485deb8ab958
    cernan_1             |                                at /checkout/src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
    cernan_1             |    1:     0x563ec439dd48 - std::panicking::default_hook::{{closure}}::h0088fe51b67c687c
    cernan_1             |                                at /checkout/src/libstd/sys_common/backtrace.rs:69
    cernan_1             |                                at /checkout/src/libstd/sys_common/backtrace.rs:58
    cernan_1             |                                at /checkout/src/libstd/panicking.rs:381
    cernan_1             |    2:     0x563ec439d1a5 - std::panicking::rust_panic_with_hook::h25b934bb4484e9e0
    cernan_1             |                                at /checkout/src/libstd/panicking.rs:397
    cernan_1             |                                at /checkout/src/libstd/panicking.rs:577
    cernan_1             |    3:     0x563ec439cc28 - std::panicking::begin_panic::h59483e27e93d7bc6
    cernan_1             |                                at /checkout/src/libstd/panicking.rs:538
    cernan_1             |    4:     0x563ec439cba9 - std::panicking::begin_panic_fmt::h5f221297e8a3dbdb
    cernan_1             |                                at /checkout/src/libstd/panicking.rs:522
    cernan_1             |    5:     0x563ec43b489a - core::panicking::panic_fmt::h4d1ab9bae1f32475
    cernan_1             |                                at /checkout/src/libstd/panicking.rs:498
    cernan_1             |    6:     0x563ec4026b1b - core::result::unwrap_failed::h8824197d9878f263
    cernan_1             |    7:     0x563ec410d124 - std::sys_common::backtrace::__rust_begin_short_backtrace::hb9f913159e607447
    cernan_1             |    8:     0x563ec410a986 - <F as alloc::boxed::FnBox<A>>::call_box::hac99f2e9ac949407
    cernan_1             |    9:     0x563ec43b0d07 - std::sys::imp::thread::Thread::new::thread_start::hbaf1b5aa1ca8e3ea
    cernan_1             |                                at /checkout/src/liballoc/boxed.rs:736
    cernan_1             |                                at /checkout/src/libstd/sys_common/thread.rs:24
    cernan_1             |                                at /checkout/src/libstd/sys/unix/thread.rs:90
    cernan_1             |   10:     0x7f5d9e3ab6b9 - start_thread
    cernan_1             |   11:     0x7f5d9decb3dc - clone
    cernan_1             |   12:                0x0 - <unknown>
    

    Interestingly, when I use an InfluxDB sink, cernan stops reporting data there at all, once this error occurs. It keeps printing things out to the console though.

    Also, interestingly, I tried to reproduce this on postal-console-stage and things didn't break there. It's hard to tell how much the versions of pystatsd and cernan differ there. If anything, pystatsd's way of serializing timings seems legit: https://github.com/postmates/pystatsd/blob/master/pystatsd/pystatsd.py#L179.

    opened by doubleyou 11
  • RFC: systemd-journald source

    This is getting closer to working. There are (at least) the following tasks still to do (disregarding code review requests).

    • [ ] Documentation,
    • [ ] Matching DSL in config file (selecting which messages to ingest),
    • [ ] Upstreaming rust-systemd pull-request (jmesmon/rust-systemd#35),
    • [ ] cfg-config it out on platforms that don't support/run systemd,
    • [ ] properly setting the timestamp,
    • [ ] tests,
    • [ ] fixing travis-ci, so it tests (at least building) this source.

    Original message below.

    This is an untested journald source.

    It depends on an (still unreviewed) pull request to the systemd crate: jmesmon/rust-systemd#35

    There are a few things missing. It cannot

    • filter yet what messages to ingest (by unit, or anything else);
    • ingest historical messages.

    It also does not contain any documentation. Where should I put it?

    I am still not sure what configuration API is best to describe matching the log records to get (systemd-journald can AND and OR matches. See sd_journal_add_match(3)). Or whether that is even desirable. What is easily possible and should be done before merging is simple filtering by one field.

    ref #232

    opened by ibotty 11
  • Make default tags a thing

    It would be convenient for some users if cernan had a default set of tags. @blakebarnett is the primary contact for this feature. For users that do not use tags, we'd want to add a default-tags: Bool option to cernan, false by default, so tags can be avoided.

    Default tags we have so far:

    • hostname

    Open to suggestions.


    After further discussion, we now want default-tags to be a list, not a boolean. See https://github.com/postmates/cernan/issues/279#issuecomment-312074665 for more details.
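If the list form wins out, the configuration might read something like this (purely hypothetical; no such key exists yet and the final spelling could well differ):

```toml
# Hypothetical top-level setting: tags applied to every piece of telemetry.
default-tags = ["hostname"]
```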

    difficulty: easy feature request 
    opened by blt 8
  • Histograms and Configurable Bins

    In order to preserve information about the distribution of request times or similar data, we can't rely on percentiles: there's no correct way to later aggregate two percentiles to get another percentile. A mean of percentiles isn't usually what you want.
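A toy illustration of the point, with hypothetical data and a naive index-based percentile:

```python
import math

def p95(xs):
    """Naive 95th percentile: the value at rank ceil(0.95 * n) of the sorted data."""
    s = sorted(xs)
    return s[math.ceil(0.95 * len(s)) - 1]

fast = [1] * 100             # one host: every request takes 1ms
slow = list(range(1, 101))   # another host: 1ms..100ms
true_p95 = p95(fast + slow)                # p95 over the merged samples
mean_of_p95s = (p95(fast) + p95(slow)) / 2
# true_p95 is 90, mean_of_p95s is 48.0: averaging badly misstates the tail.
```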

    What we can do is calculate histograms for each reporting period, then report each bin of that histogram, for each flush interval, to Wavefront. From Wavefront's perspective it's just a bunch of separate metrics, but we can reassemble them and calculate a histogram for the distribution of some values for arbitrary intervals without loss of information.

    The tricky part here is communicating from the application to Cernan what the bins should be. Statsd can be configured to calculate histograms in this way, but it involves defining the bins in statsd's config file which is awkward to realize.

    So I propose an extension to the statsd protocol so applications can communicate to cernan at application start the desired histogram bins. Proposed format:

    metric.name:-inf,1,10,100,inf|h

    This configures four bins for the metric metric.name:

    • less than 1
    • [1, 10)
    • [10, 100)
    • greater than or equal to 100

    Applications can send these each time they start, and Cernan will write them to the filesystem so they don't need to be sent again if Cernan restarts.
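A sketch of how a receiver might parse the proposed syntax (a hypothetical parser, not cernan code; `-inf`/`inf` become float infinities and each bin is half-open `[lo, hi)`):

```python
import bisect

def parse_bin_spec(spec: str):
    """Split 'metric.name:-inf,1,10,100,inf|h' into (name, bin edges)."""
    name, rest = spec.split(":", 1)
    edges_text, kind = rest.rsplit("|", 1)
    if kind != "h":
        raise ValueError("not a histogram bin spec")
    return name, [float(e) for e in edges_text.split(",")]

def bin_index(edges, value):
    """Each bin i covers [edges[i], edges[i+1]); bisect_right picks it."""
    return bisect.bisect_right(edges, value) - 1

name, edges = parse_bin_spec("metric.name:-inf,1,10,100,inf|h")
# bin_index(edges, 0.5) -> 0, bin_index(edges, 10) -> 2, bin_index(edges, 250) -> 3
```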

    opened by bitglue 8
  • Avro source protocol v2

    This introduces a new wire format version for the Avro source.

    The existing version 1 Avro source doesn't provide any support for extensible metadata.

    Introduced in version 2 is a set of key value pairs in the header that can be used to carry arbitrary metadata.

    The protocol allows up to 255 key value pairs, each with a key of up to 255 UTF-8 encoded bytes and a value of up to 65535 bytes in length.

    The new packet structure is described below in IETF RFC format:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                             Length                            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                            Version                            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                            Control                            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    +                               ID                              +
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    +                            ShardBy                            +
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |   #KV Pairs   |   Key Length  |    Key (up to 255 bytes)      |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |          Value Length         |   Value (up to 65535 bytes)   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                          Avro N Bytes                         |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    

    The source remains backwards compatible with version 1 clients.
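Going by the figure's field widths alone, a client might assemble the v2 header like this (a sketch; big-endian byte order is an assumption, since the figure gives widths but not endianness, and the Avro payload would follow the packed pairs):

```python
import struct

def pack_v2_header(version, control, ident, shard_by, kv_pairs):
    """Pack the v2 header fields after the leading Length word:
    u32 Version, u32 Control, u64 ID, u64 ShardBy, then the KV section
    (u8 pair count; per pair a u8 key length, key, u16 value length, value)."""
    if len(kv_pairs) > 255:
        raise ValueError("at most 255 key value pairs")
    out = struct.pack(">IIQQB", version, control, ident, shard_by, len(kv_pairs))
    for key, value in kv_pairs:
        if len(key) > 255 or len(value) > 65535:
            raise ValueError("key or value too long")
        out += struct.pack(">B", len(key)) + key
        out += struct.pack(">H", len(value)) + value
    return out

header = pack_v2_header(2, 0, 42, 7, [(b"env", b"prod")])
```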

    opened by josephglanville 7
  • Add error handling to calling lua functions

    Without this, errors in lua cause the C lua code to abort().

    I found it was easier to refactor the repeated process_metric/process_log/tick code than to make the change in three places.

    opened by ekimekim 7
  • Add new filter json_encode to convert loglines to json-encoded raw

    It will also optionally try to parse the log line as JSON and include that at the top level.

    This is needed for our structured logging so we can emit json to disk and have cernan forward it to kafka.

    opened by ekimekim 7
  • Cernan needs a proper binning histogram

    As of 2f08dbc0c11f0089e6e798241045a76730c3559a the only summarization type that cernan supports is the CKMS from our quantiles. Quantile structures are great but suffer from the need to aggregate the structures before their data makes sense, as in Wavefront or Prometheus. This is not a problem for binning histograms, that is, histograms computed from a client-supplied min/max range plus a number of bins. Individual cernan instances are able to choose the same aggregating behaviour without coordination.

    We should extend the native protocol to include a HISTOGRAM summary type. I believe we will need:

    • a global configuration for default min/max
    • a global configuration for default number of bins
    • extension to the native protocol to include stream-specific min/max, bins
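The bucketing that a client-supplied min/max plus bin count implies could be as simple as (illustrative only, not cernan code; out-of-range values clamp into the edge bins):

```python
def bucket(value, lo, hi, bins):
    """Map value to one of `bins` equal-width buckets over [lo, hi),
    clamping out-of-range values into the first or last bucket."""
    if value < lo:
        return 0
    if value >= hi:
        return bins - 1
    width = (hi - lo) / bins
    return int((value - lo) / width)

# Two cernan instances sharing the same lo/hi/bins agree on every bucket
# index without coordination, so their per-bucket counts simply add.
```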

    We may also choose to retarget statsd histogram/timers to this type. I'm unsure about this.

    This is related to #66.

    @lynchc @bitglue relevant to your interests

    enhancement difficulty: hard 
    opened by blt 7
  • Compilation failure, OS X

    Kushal Pisavadia has reported on twitter that he's having trouble compiling cernan. Twitter's too wee to get much information so I'm creating this tracking issue to solicit detailed compilation failures.

    ongoing discussion compilation failure: OS X 
    opened by blt 7
  • Scalable bins

    Back last week @bitglue and I had a conversation about the QOS changes that landed in #126. In particular, he pointed out that figuring out how many points you'd elide in a flush interval was confusing and that the random sampling could tend to lose outlier points. With #131 it became possible to represent every bin as a quantile structure because Metrics were now quantile structures and each bin is a metric.

    This commit removes the QOS concept entirely and provides a configurable bin-width per sink. That is, it's now possible to say that the console has a one-second bin width while the wavefront sink has a 10 second bin width. Each bin has full quantile introspection available to it though none of the sinks are able to consume that as yet.

    opened by blt 7
  • Bump net2 from 0.2.32 to 0.2.37

    Bumps net2 from 0.2.32 to 0.2.37.


    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump miow from 0.2.1 to 0.2.2

    Bumps miow from 0.2.1 to 0.2.2.

    Commits
    • 6fd7b9c Bump version to 0.2.2
    • 550efc2 Merge branch 'fix-sockaddr-convertion-v0.2.x' into 0.2.x
    • ca8db53 Stop using from_ne_bytes to be compatible with Rust < 1.32.0
    • 3e217e3 Bump net2 dep to 0.2.36 without invalid SocketAddr convertion
    • 27b77cc Adapt to winapi 0.2
    • 2783715 Safely convert SocketAddr into raw SOCKADDR
    • f6662ef Clarify wording of license information in README.
    • See full diff in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump protobuf from 1.7.3 to 2.6.0

    Bumps protobuf from 1.7.3 to 2.6.0.

    Changelog

    Sourced from protobuf's changelog.

    [2.6.0] - 2019-05-19

    [2.5.0] - 2019-04-15

    [2.4.2] - 2019-03-29

    [2.3.1] - 2019-03-05

    [2.3.0] - 2019-01-30

    [2.2.5] - 2019-01-20

    [2.2.4] - 2019-01-13

    [2.2.2] - 2018-12-29

    [2.2.1] - 2018-12-25

    [2.2.0] - 2018-11-17

    ... (truncated)

    Commits
    • e506e96 Bump version
    • 8235552 Implement Hash for UnknownFields
    • 71f09ae Change minimum supported Rust version to 1.26
    • f32ac1c Fuzz testing with Read trait and fix OOM on incorrect input
    • de13d72 add missing codegen rules
    • acdc94a *: support add lite-runtime customization options
    • 8b3ec6f Bump version
    • abcc3db build before regenerate
    • f9ed7a5 Regenerate code after Debug implemented for oneof enums
    • 859d7f5 Derive Debug for oneof enums
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump tiny_http from 0.6.0 to 0.8.0

    Bumps tiny_http from 0.6.0 to 0.8.0.

    Release notes

    Sourced from tiny_http's releases.

    0.6.4

    • Don't honour client TE for 1xx or 204 responses

      Where we're sending an Informational (1xx) or No Content (204) response, per the RFC we should never set a Transfer-Encoding header, regardless of what the client supplied in its TE header.

    Changelog

    Sourced from tiny_http's changelog.

    0.8.0

    0.7.0

    0.6.2

    0.6.1

    Commits
    • 8526e35 Prepare for 0.8.0 release
    • 172ca81 Merge pull request #190 from fortian/http-request-smuggle
    • cbe07c3 Add test suggested (and written) by @rawler
    • 46a14ca Header fields can't contain whitespace.
    • 952439b add test
    • 36583ed Merge remote-tracking branch 'upstream/master' into http-request-smuggle
    • ffbde0d Merge pull request #192 from inrustwetrust/flush-errors
    • 476caa2 Filter out the same socket-closing errors on flush as on write
    • 623b873 Fix RUSTSEC-2020-0031
    • 4770db9 Merge pull request #186 from EstebanBorai/feat/getters-for-response
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Bump libflate from 0.1.14 to 0.1.27

    Bumps libflate from 0.1.14 to 0.1.27.

    Commits
    • 193fd65 Bump version to v0.1.27
    • 0170b0c Merge pull request #43 from sile/apply-cargo-fix
    • de4dbee Apply cargo fix
    • 3ff53db Bump version to v0.1.26
    • ec3bd87 Apply rustfmt
    • 1b80cbd Merge pull request #42 from lukaslueg/byteorder_removed
    • ec74ff4 Remove byteorder-dependency
    • 22d1090 Merge pull request #39 from Stargateur/improve-decode-code
    • 15811cb Improve decode code of read_non_compressed_block()
    • 2efa0ab Bump version to v0.1.25
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 0
  • Better auth story

    From the security review bug (https://github.com/postmates/cernan/issues/461):

    None of the sources auth, the few sinks that have auth credentials are, iirc, not wired up to authenticate. IP whitelisting and/or presence in a blessed subnet was Good Enough. Obvious issues there.

    opened by randomdross 0
Releases(0.9.1)
  • 0.9.1(Feb 11, 2019)

    This release includes a new version of the Avro source that can carry arbitrary metadata out of band. Additionally, rustfmt and clippy have been run over the codebase and are now run in Travis to ensure code stays formatted and free of lint violations; this accounts for the vast majority of changes in this release.

  • 0.9.0(Jun 13, 2018)

    • Kinesis & Firehose sinks have been removed as they were not being actively used.

    • BufferedPayload allocations have been capped at 1MB. This prevents malicious users from running Cernan out of memory with bad requests.

    • Timestamps are now explicitly set in the Kafka sink. See - https://github.com/postmates/cernan/pull/441.

    • Migrated from travis-ci.org to travis-ci.com.

  • 0.8.17(May 22, 2018)

    This release contains a memory-leak fix for our kafka dependency as well as a change to prometheus internal telemetry. The bulk of the diff is in automatic upgrades to our crates.

    • 12520e9 :: cernan.sinks.prometheus.aggregation.reportable as SET, not SUM (#435)
    • ed4339e :: Upgrade rdkafka to 0.11.4 (rustrdkafka to 0.16.0) (#436)
  • 0.8.16(Mar 7, 2018)

    This release includes improvements to the programmable_filter's error handling, enables the logging subsystem before configuration-file parsing happens, and quiets the kafka sink when reporting hangup errors.

    • e78bf80 :: Add a context that ignores "Receive failed: Disconnected" error logs
    • 8b171f7 :: Remove extra lazy_statics
    • 408722d :: Add error handling to calling lua functions
    • 604d7d7 :: Enable the logging subsystem before config parsing
  • 0.8.15(Mar 1, 2018)

    This release corrects an issue with the Elasticsearch sink losing tags. We also expect the memory efficiency of the logging subsystem to improve. The json_encode filter is now hooked up and the kafka sink's error handling has been improved.

    • 29a6db5 :: Promote MessageTimedOut to an error that will be retried
    • c9c6f01 :: json_encode filter configuration
    • 1a8fa81 :: Fix the lack of tags in ES sink, introduced in 0.8.13
  • 0.8.14(Feb 28, 2018)

    This release contains a host of small fixes to cernan. The wavefront sink live-stall is, we believe, addressed. We also no longer emit Elasticsearch debug messages at error level, which turned out to cause a feedback cycle for some users.

    • a14146e :: Remove redundant extern
    • 68a63bb :: Emit debug, not error, in ES sink
    • ac3d847 :: Address one possible WF sink live-stall cause
  • 0.8.13(Feb 23, 2018)

    This release incorporates significant improvements to cernan and is a recommended upgrade. In particular, cernan now uses hopper 0.4. This update of hopper is significantly more efficient, being only about 3x slower than built-in MPSC while maintaining resource quotas. These quotas are now exposed as configuration to cernan. An experimental kafka sink is also added but care should be taken with its use. Mike Lang has been landing improvements to the programmable filters and it is hoped they are now more useful.

    Cernan's baseline acceptance test is now 272k statsd PPS, held in 10MB. This is up from 45k statsd PPS at 150MB.

    • dcbeb7d :: Reduce high PPS memory consumption
    • 4d2d118 :: Update hopper to 0.4
    • d10d195 :: Added json_encode filter
    • ecc2fdf :: Kafka sink introduced
  • 0.8.12(Jan 31, 2018)

    This release contains a bug fix for 0.8.11, which double-counted its internal telemetry. Anyone using 0.8.11 and the internal source is recommended to update. Otherwise, you're more or less blind.

    • dac169c :: Don't double-count internal telemetry
  • 0.8.11(Jan 30, 2018)

    This release switches our internal telemetry from a Set aggregation to Sum, removing the reset to zero between flushes. In release mode cernan will wrap around when the counters are fully saturated. Don't saturate your counters in debug mode. We also correct telemetry in the kinesis sink.

    • c6abe6b :: Switch internal telemetry to Sum from Set
    • f50a892 :: Fixes Kinesis Internal Telem. on Batch Success
  • 0.8.10(Jan 22, 2018)

    This release improves warnings around truncation of bin_width when flush_interval is lower than bin_width for a given sink. The native source will no longer suffer a space explosion if the far side does something goofy, panics have been removed from the statsd/tcp sources, and the kinesis sink is now more durable to errors.

    • d0bb60b :: Warn if Wavefront bin_width > flush_interval
    • 0f4756b :: Resolve space explosion in the native source
    • 44fcabc :: Removes Undesirable Panics From statsd/tcp Sources
    • 8c3084f :: Improves Durability of Kinesis Sink.
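
    The bin_width warning can be sketched as a small configuration check. This is a hypothetical illustration (the function name and second-granularity parameters are assumptions, not cernan's code): when a sink's bin_width exceeds its flush_interval, bins will be truncated at every flush, so the configuration deserves a warning rather than silent acceptance.

    ```rust
    // Hypothetical sketch of the d0bb60b-style check, not cernan's implementation.

    /// Return a warning message when bin_width exceeds flush_interval for a
    /// sink, since bins would then be truncated at every flush; None otherwise.
    fn bin_width_warning(sink: &str, bin_width_secs: u64, flush_interval_secs: u64) -> Option<String> {
        if bin_width_secs > flush_interval_secs {
            Some(format!(
                "sink '{}': bin_width ({}s) exceeds flush_interval ({}s); bins will be truncated at flush",
                sink, bin_width_secs, flush_interval_secs
            ))
        } else {
            None
        }
    }

    fn main() {
        assert!(bin_width_warning("wavefront", 60, 10).is_some());
        assert!(bin_width_warning("wavefront", 1, 10).is_none());
    }
    ```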
  • 0.8.9(Jan 17, 2018)

    This release contains a small bug fix for the wavefront sink. In particular, if you specified a non-default bin_width for that sink, the configured value would not persist after the first wavefront flush.

    • b7f9112 :: Correct bin_width goof in Wavefront sink
  • 0.8.8(Jan 17, 2018)

    This is the first release in a good long while that will appear on crates.io. This has been done by moving away from a github-only fork of lua to 'mond'. We've also further improved the async IO situation in cernan, improving the buffering of payloads in the presence of WouldBlock.

    • 8f2520b :: Use blt/mond, not blt/rust-lua53
    • 0671839 :: Statsd Source Reads All Datagrams until WouldBlock
    • 3470b96 :: Fixes Nonblocking TcpStream Operations
  • 0.8.7(Jan 10, 2018)

    This release introduces @pulltab's at-least-once updates, meaning that cernan is now able to shut itself down safely when you send it SIGINT or SIGTERM. A handful of experimental sinks and sources have also been added, documentation to appear in the wiki. This release integrates mio at a deep level into cernan, reducing the total number of OS threads that cernan requires in the common case.

    • 6313149 :: Introduce graceful shutdown
    • a4da52d :: At Least Once Delivery on Graceful Shutdown
    • 3d4c680 :: Correct statsd mapping crash
    • 9629b09 :: Introduction of avro source
    • 3bcd14b :: Introduce Kinesis Sink, Improve Sink Interface
    • 2dc08c4 :: Introduce Event::Raw for kinesis, else
    • 2ef4c4a :: Adds Sync Publication to Avro Source
    • 45e219e :: File Source now tracks truncated files well
  • 0.8.6(Dec 12, 2017)

    This release incorporates improvements to the postmates/quantiles library that speed up the processing of Summaries in Prometheus sink, resolving (or greatly reducing) exposition failures in that sink. An InfluxDB formatting bug has also been repaired, which is fancy.

    • 5028ed9 :: consistency: move initialization out of block (#358)
    • 6197310 :: Fix InfluxDB format bug (#363)
    • bbf98a5 :: Target quantiles 0.7 (#365)

    This release contains the first contributions of @doubleyou and @ibotty! :D

  • 0.8.5(Dec 6, 2017)

    This release adjusts the Elasticsearch sink to better handle timeouts and clear its internal buffer. Also, we now keep a long-lived sum/count for Prometheus sink Summaries, despite the windowing.

    • 859ca8a :: Avoid catastrophic failure when an ES request times out (#356)
    • 071328c :: Perform a linear search in ES buffer (#357)
    • 86f753c :: Introduce Telemetry override for 'count' and 'sample_sum' (#354)
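
    The windowed-summary behaviour described here and in the 0.8.4 fix below can be sketched as follows. This is an assumed structure for illustration only, not cernan's implementation: quantile samples age out as windows rotate, while a long-lived sum and count survive the windowing.

    ```rust
    // Illustrative sketch, not cernan's code: a windowed summary whose sample
    // windows rotate out over time but whose sum/count are long-lived.
    struct WindowedSummary {
        windows: Vec<Vec<f64>>, // oldest window is dropped on rotation
        sum: f64,               // long-lived, unaffected by rotation
        count: u64,             // long-lived, unaffected by rotation
    }

    impl WindowedSummary {
        fn new(window_count: usize) -> Self {
            WindowedSummary {
                windows: vec![Vec::new(); window_count],
                sum: 0.0,
                count: 0,
            }
        }

        fn insert(&mut self, value: f64) {
            self.windows.last_mut().expect("window_count > 0").push(value);
            self.sum += value;
            self.count += 1;
        }

        /// Rotate out the OLDEST window and open a fresh one. Discarding the
        /// wrong side of the window was the bug corrected in 0.8.4.
        fn rotate(&mut self) {
            self.windows.remove(0);
            self.windows.push(Vec::new());
        }
    }

    fn main() {
        let mut s = WindowedSummary::new(2);
        s.insert(1.0);
        s.rotate();
        s.rotate(); // both original windows are now gone...
        assert_eq!(s.count, 1); // ...but sum and count persist
        assert_eq!(s.sum, 1.0);
    }
    ```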
  • 0.8.4(Dec 4, 2017)

    This release fixes a problem with the prometheus sink's take on summaries. In particular, we were destroying the wrong side of the window, meaning all new values were (eventually) being dumped. This bug was caught thanks to @dparton.

    • 23688f6 :: Correct prometheus window summary truncation (#352)
  • 0.8.3(Dec 4, 2017)

    This release bundles some small housekeeping -- updating of deps -- with a fix to the Elasticsearch sink. It turns out that in the case of a bulk update succeeding ES will still signal that a few lines have failed. We used to dump those lines. Now, we store them and re-attempt to submit them up to some configurable limit.

    • 3bc5a25 :: Remove ES loglines when they've signaled as OK (#349)
    • 1d32c5e :: Update dependencies via 'cargo update'
  • 0.8.2(Dec 4, 2017)

    This commit integrates optional point shedding for telemetry in the Wavefront sink. This is useful to shed telemetry that has gone past the Wavefront backfill cutoff.

    • c824f1c :: Allow wavefront sink to shed points (#347)
  • 0.8.1(Nov 29, 2017)

    This release improves the performance of the Prometheus sink when the aggregation server is either slow to hang up or there are many servers calling in. Previously an exclusive lock was held over the sink aggregator, a real problem when you're trying to ingest and report frequently.

  • 0.8.0(Nov 27, 2017)

    This release inaugurates the work of making cernan an at-least-once delivery kind of system. This is John K's main show. We've also included the feedback of end-users into the prometheus sink, making it behave more like what folks expect and not breaking the bank in doing so:

    • da543d39 :: Introducing chan-signal & Signal Handling (#333)
    • c038a77c :: Remove any unwrap / excepts in prometheus critical path (#338)
    • 9b274cbd :: Dramatically improve prometheus write speed, report CPU
    • 428bf5b2 :: Allow end-users to configure statsd error bounds (#341)
    • c5aaad46 :: Allow an 'enabled' flag to be set for firehose sink (#342)
    • c084e607 :: Windowing for prometheus sink's Summaries (#344)
  • 0.7.12(Oct 16, 2017)

    This release addresses compatibility issues with Prometheus 2, improving the tag handling of the prometheus sink and gzipping all output.

    • df974bb :: Avoid fiddling with kinds when doing sanitization in Prometheus
    • 45f7474 :: Carefully format prometheus tags, gzip output
  • 0.7.11(Oct 13, 2017)

    This release contains overflow fixes for the prometheus sink in high-cardinality environments as well as improvements to the handling of points in the event of Sink valve closure.

    • e503a89 :: Resolve Prometheus backup issue
  • 0.7.10(Oct 13, 2017)

    This release introduces a small change of configuration to the Elasticsearch sink: it's now possible to set the '_type' of the created index.

    • cb7d3e5 :: allow users to set the index type
  • 0.7.9(Sep 28, 2017)

    This release has two major changes: improvements to the elasticsearch sink thanks to the careful shepherding of @benley and self-telemetry updates to the elasticsearch sink. The latter is driven by Postmates' desire to deprecate AWS Firehose.

    • 916841a :: elasticsearch improvements, self-telemetry
    • 9bbd951 :: prometheus sink obeys all aggregations
  • 0.7.8(Sep 26, 2017)

    This release includes telemetry and logging improvements in the Elasticsearch sink, but the primary modifications concern the introduction of HISTOGRAM as a first-class citizen in the project. The statsd source can be configured to use HISTOGRAM and the wavefront sink can now interpret these aggregations.

    • ffbc112 :: update release build to use ekidd/rust-musl-builder
    • e033448 :: implementation of samples_sum for Telemetry
    • 8cea57a :: introduce openssl_probe for SSL cert search
    • 805a7aa :: allow histograms to be flipped on for statsd metrics
    • 728f016 :: addition of telemetry, logging in Elasticsearch sink
    • 63fafd0 :: add histogram support to wavefront sink
  • 0.7.7(Sep 14, 2017)

    This release allows end users to configure the size of hopper index files, fixes the bug where sinks get stuck in the valve-closed state, and explicitly updates Cargo.toml to point to hopper 0.3.2, which was already on the stable branch in the Cargo.lock file but wasn't present in the config.

    • 7cae683c :: avoid getting stuck on a closed sink valve
    • 2918f086 :: allow hopper maximum index size to be configured
  • 0.7.6(Sep 13, 2017)

    This release updates the libraries we depend on. In particular, we've updated to hopper 0.3.2 and will no longer have multiple file descriptors open for the same index files. This will free up disk space on systems where one slow Sender maintains a file pointer, holding the file's contents.

  • 0.7.5(Sep 11, 2017)

    This release contains fixes for the overlong holding of file descriptors in a couple of spots inside of cernan:

    • 07d3fad :: drop unused structures after cernan boot
    • de747e1 :: drop deleted files from polling in file source

    The first change allows hopper to drop file descriptors opened as a part of cernan startup. The second change allows the file source to drop file descriptors as soon as the files are marked deleted. Previously we assumed that this could be avoided on the expectation of applications creating a new logfile. This assumption was not valid.

  • 0.7.4(Sep 6, 2017)

    This release contains two primary changes to the project:

    • b2236f0 :: Introduce reading tags from environment variables
    • f7ced09 :: Drain unbounded buffer in Internal source

    The first change is motivated by the needs of @blakebarnett for use in k8s. It is hoped this change will ease the use of cernan in that environment by allowing tag values to be read from, uh, the environment. The second change removes a potential place of unbounded growth in cernan.

  • 0.7.3(Sep 5, 2017)

    This release corrects an issue with Wavefront padding. In particular we did not correctly disable wavefront padding in all cases. We now do.
