Easily add metrics to your system -- and actually understand them using automatically customized Prometheus queries

Overview

Autometrics 📈


Autometrics is a macro that makes it trivial to add useful metrics to any function in your codebase.

Easily understand and debug your production system using automatically generated queries. Autometrics adds links to Prometheus charts directly into each function's doc comments.

(Coming Soon!) Autometrics will also generate dashboards (#15) and alerts (#16) from simple annotations in your code. Implementations in other programming languages are also in the works!

1️⃣ Add #[autometrics] to any function or impl block

#[autometrics]
async fn create_user(Json(payload): Json<CreateUser>) -> Result<Json<User>, ApiError> {
  // ...
}

#[autometrics]
impl Database {
  async fn save_user(&self, user: User) -> Result<User, DbError> {
    // ...
  }
}

2️⃣ Hover over the function name to see the generated queries

VS Code Hover Example

3️⃣ Click a query link to go directly to the Prometheus chart for that function

Prometheus Chart

4️⃣ Go back to shipping features 🚀

See it in action

  1. Install Prometheus locally
  2. Run the axum example:
cargo run --features="prometheus-exporter" --example axum
  3. Hover over the function names to see the generated query links (like in the image above) and try clicking on them to go straight to that Prometheus chart.

Why Autometrics?

Metrics today are hard to use

Metrics are a powerful and relatively inexpensive tool for understanding your system in production.

However, they are still hard to use. Developers need to:

  • Think about what metrics to track and which metric type to use (counter, histogram... 😕 )
  • Figure out how to write PromQL or another query language to get some data 😖
  • Verify that the data returned actually answers the right question 😫

Simplifying code-level observability

Many modern observability tools promise to make life "easy for developers" by automatically instrumenting your code with an agent or eBPF. Others ingest tons of logs or traces -- and charge high fees for the processing and storage.

Most of these tools treat your system as a black box and use complex and pricey processing to build up a model of your system. This, however, means that you need to map their model onto your mental model of the system in order to navigate the mountains of data.

Autometrics takes the opposite approach. Instead of throwing away valuable context and then using compute power to recreate it, it starts inside your code. It enables you to understand your production system at one of the most fundamental levels: from the function.

Standardizing function-level metrics

Functions are one of the most fundamental building blocks of code. Why not use them as the building block for observability?

A core part of Autometrics is the simple idea of using standard metric names and a consistent scheme for tagging/labeling metrics. The three metrics currently used are: function.calls.count, function.calls.duration, and function.calls.concurrent.

Labeling metrics with useful, low-cardinality function details

The following labels are added automatically to all three of the metrics: function and module.

For the function call counter, the following labels are also added:

  • caller - (see "Tracing Lite" below)
  • result - either ok or error if the function returns a Result
  • ok / error - see the next section

Static return type labels

If the concrete Result types implement Into<&'static str>, that string will also be added as a label value under the key ok or error.

For example, you can have the variant names of your error enum included as labels:

use strum::IntoStaticStr;

#[derive(IntoStaticStr)]
#[strum(serialize_all = "snake_case")]
pub enum MyError {
  SomethingBad(String),
  Unknown,
  ComplexType { message: String },
}

In the above example, functions that return Result<_, MyError> would have an additional label error added with the values something_bad, unknown, or complex_type.
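
As a concrete usage sketch (the function name and body here are hypothetical), annotating a function that returns Result<_, MyError> is all that is needed:

#[autometrics]
async fn delete_user(id: u64) -> Result<(), MyError> {
  // Hypothetical body: a call that returns Err(MyError::Unknown) is counted with
  // result="error" and error="unknown"; an Ok return is counted with result="ok".
  if id == 0 {
    return Err(MyError::Unknown);
  }
  Ok(())
}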

This is more useful than tracking external errors like HTTP status codes because multiple logical errors might map to the same status code.

Autometrics only supports &'static strs as labels to avoid the footgun of attaching labels with too many possible values. The Prometheus docs explain why this is important in the following warning:

CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.

"Tracing Lite"

A slightly unusual idea baked into autometrics is that by tracking more granular metrics, you can debug some issues that would traditionally require reaching for tracing.

Autometrics can be added to any function in your codebase, from HTTP handlers down to database methods.

This means that if you are looking into a problem with a specific HTTP handler, you can browse through the metrics of the functions called by the misbehaving function.

Simply hover over the function names of the nested function calls in your IDE to look at their metrics. Or, you can directly open the chart of the request or error rate of all functions called by a specific function.
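
As a sketch (the handler name is hypothetical, and the exact format of the caller label value is an assumption), instrumenting a handler alongside the Database::save_user method from the earlier example means the counter for save_user also records which function called it:

#[autometrics]
async fn create_user_handler(db: &Database, user: User) -> Result<User, DbError> {
  // Because both this handler and Database::save_user are annotated, the counter
  // for save_user carries a caller label pointing back at this handler, so its
  // request and error rate can be charted for this code path alone.
  db.save_user(user).await
}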

More to come!

Stay tuned for automatically generated dashboards, alerts, and more!

Exporting Prometheus Metrics

Autometrics includes optional functions to help collect metrics and expose them in a format that Prometheus can scrape.

In your Cargo.toml file, enable the optional prometheus-exporter feature:

autometrics = { version = "*", features = ["prometheus-exporter"] }

Then, call the global_metrics_exporter function in your main function:

pub fn main() {
  let _exporter = autometrics::global_metrics_exporter();
  // ...
}

And create a route on your API (probably mounted under /metrics) that returns the following:

// StatusCode here is the http crate's type (re-exported by axum as axum::http::StatusCode)
use axum::http::StatusCode;

pub fn get_metrics() -> (StatusCode, String) {
  match autometrics::encode_global_metrics() {
    Ok(metrics) => (StatusCode::OK, metrics),
    Err(err) => (StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err))
  }
}
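
For example, with axum (the framework used in the bundled example), the handler above could be wired up roughly like this (a minimal sketch; the metrics_router helper is hypothetical):

use axum::{routing::get, Router};

fn metrics_router() -> Router {
  // Expose the get_metrics handler above at /metrics so Prometheus can scrape it.
  Router::new().route("/metrics", get(|| async { get_metrics() }))
}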

Configuring

Custom Prometheus URL

By default, Autometrics creates Prometheus query links that point to http://localhost:9090.

You can configure a custom Prometheus URL using a build-time environment variable set in your build.rs file:

// build.rs

fn main() {
  let prometheus_url = "https://your-prometheus-url.example";
  println!("cargo:rustc-env=PROMETHEUS_URL={prometheus_url}");
}
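
If you would rather not hardcode the URL, a variant of the same build script (using a hypothetical MY_PROMETHEUS_URL variable) could read it from the build environment:

// build.rs

fn main() {
  // Rebuild when the (hypothetical) variable changes.
  println!("cargo:rerun-if-env-changed=MY_PROMETHEUS_URL");
  if let Ok(prometheus_url) = std::env::var("MY_PROMETHEUS_URL") {
    println!("cargo:rustc-env=PROMETHEUS_URL={prometheus_url}");
  }
}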

Note that when using Rust Analyzer, you may need to reload the workspace in order for URL changes to take effect.

Feature flags

  • metrics - use the metrics crate for producing metrics
  • opentelemetry (enabled by default) - use the opentelemetry crate for producing metrics
  • prometheus - use the prometheus crate for producing metrics (see the Cargo.toml sketch after this list)
  • prometheus-exporter - exports a Prometheus metrics collector and exporter (compatible with any of the metrics/opentelemetry/prometheus features)
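
For instance, to produce metrics with the prometheus crate instead of the default opentelemetry backend, the Cargo.toml entry might look like the sketch below (treating default-features = false as an assumption, since opentelemetry is the default backend):

autometrics = { version = "*", default-features = false, features = ["prometheus", "prometheus-exporter"] }
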
Comments
  • Feature flag to use the `metrics` crate for creating metrics

    Currently, we use the opentelemetry crate to export metrics. If people are already using the metrics crate, it would be useful to have a feature flag that changes the macro behavior to insert function-level metrics using that crate instead.

    enhancement 
    opened by emschwartz 1
  • Support other metrics libraries

    Adds support for using either the prometheus or metrics crates instead of opentelemetry for collecting metrics.

    • Refactor labels module to isolate use of opentelemetry
    • Move opentelemetry dependency behind a feature flag
    • Make metrics tracking a trait
    • Don't allow metric name to be configured
    • Support using the prometheus crate to collect metrics
    • Add support for using metrics-rs crate
    • Add instructions for configuring metrics library to README

    Resolves #12, #13

    opened by emschwartz 0
  • Support applying autometrics to an `impl` block

    In addition to adding the #[autometrics] annotation to a single function, we can enable adding it to a whole impl block. This would make it even easier to apply to a group of functions like HTTP or database handlers.

    The macro just needs to be extended to take an ItemImpl and iterate through the methods applying the original macro.

    enhancement 
    opened by emschwartz 0
  • Make concurrent request gauge optional?

    We're currently using a gauge to track the number of concurrent requests to every function. It might add more overhead than is really useful in a lot of cases.

    We could either remove the tracking of concurrent requests or make it an optional parameter for the autometrics macro: #[autometrics(track_concurrency)] (or something like that).

    Thoughts?

    question 
    opened by emschwartz 0
  • Feature flag to use `prometheus` crate for creating metrics

    Currently, we use the opentelemetry crate to export metrics. If people are already using the prometheus crate, it would be useful to have a feature flag that changes the macro behavior to insert function-level metrics using that crate instead.

    enhancement 
    opened by emschwartz 0
  • Documentation fixes

    All the things you think about as soon as you hit publish:

    • Add description and license to macros crate
    • Fix typo in dependency
    • Use unicode emojis
    • Remove git link from readme
    • Show prometheus exporter on docs.rs
    • Add module doc comments
    opened by emschwartz 0
  • Add `caller` label to counter

    Uses a task local to track which function called the given function, adds the caller label to the counter, and adds the doc links to the charts for the request rate and error ratio of functions called by the given function.

    opened by emschwartz 0
  • Use a separate counter for tracking errors + other labels

    Histograms are relatively expensive because they keep a record of every bucket + count + sum for every metric x label combination. Therefore, it seems better to use a separate counter to keep track of the total number of function calls and errors. This way, we can attach all of the labels we want to the counter and use only the function + module names as labels on the histogram.

    opened by emschwartz 0
  • Add return types as labels when they implement Into<&'static str>

    This makes it possible to capture one level of additional details, such as the enum variant when a function returns a Result<_, Error> where Error is an enum.

    Currently, that enum needs to implement Into<&'static str> (which can be automatically done using the strum::IntoStaticStr derive macro) in order to be used by autometrics. The purpose of this is to add additional detail while limiting the cardinality of the labels to avoid blowing up the storage space for the metrics.

    A slightly better API would be to export a Label derive macro that does this, as well as ensuring that the static str is snake_case. However, this is more complicated to implement so we can come back and implement this before publishing this crate.

    opened by emschwartz 0
  • Add support for adding exemplars?

    Some metrics libraries, such as prometheus-client, support adding OpenMetrics exemplars to counters and histograms. If people are interested in such a feature, we could investigate adding support to the autometrics API for attaching dynamic function parameters as exemplars.

    Please 👍 if you would be interested in this.

    Suggested by SpudnikV on Reddit.

    enhancement question 
    opened by emschwartz 0
  • Include links to graphs with different time ranges?

    It might be useful to include links to the Prometheus graphs with different time ranges, like:

    question 
    opened by emschwartz 4
  • `Label` derive macro

    Right now, autometrics adds the return type as a label if the concrete types in a Result<T, E> implement Into<&'static str>.

    It would be better if Autometrics had its own Label derive macro that you would use with your enums. That would make it more explicit that you're opting into making that a label.

    One thing to consider: if you have a Result type, it currently uses the label ok="variant_name" or error="variant_name". If you wanted to include a label from a function parameter instead of the return value, we'd probably want the label to be something like type_name="variant_name". If we do that, should we change the behavior for Results so you have the same type_name="variant_name" label, or is it helpful to have a standard error="variant_name" label?

    enhancement 
    opened by emschwartz 0
  • Generate alerts / SLOs

    Either generate a Sloth or OpenSLO file, which can then be used to create alerts, or directly generate the Prometheus AlertManager alert definitions.

    I'm imagining passing parameters to the autometrics macro, such as:

    #[autometrics(objectives(success_rate = 99.9, latency_target = 0.2, latency_percentile = 99))]
    

    You would add this to specific important functions like a top-level HTTP request handler on an API.

    enhancement 
    opened by emschwartz 1
Releases(v0.2.3)
  • v0.2.3(Jan 31, 2023)

    Fix how docs.rs builds the documentation so that optional features show up

    Full Changelog: https://github.com/fiberplane/autometrics-rs/compare/v0.2.0...v0.2.3

  • v0.2.0(Jan 31, 2023)

    What's Changed

    • Support applying autometrics to an Impl block by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/19
    • Support other metrics libraries by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/21
    • Make concurrency tracking optional by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/22

    Full Changelog: https://github.com/fiberplane/autometrics-rs/compare/v0.1.1...v0.2.0

  • v0.1.1(Jan 27, 2023)

    What's Changed

    • Track concurrent requests by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/1
    • Add return types as labels when they implement Into<&'static str> by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/2
    • Use Open Telemetry metric naming convention by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/3
    • Export optional prometheus exporter by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/4
    • Fix duplicate exporter registration by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/5
    • Use a separate counter for tracking errors + other labels by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/6
    • Add caller label to counter by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/7
    • Cleanup code by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/8
    • Add example, update documentation by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/9
    • Prepare to publish by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/10
    • Documentation fixes by @emschwartz in https://github.com/fiberplane/autometrics-rs/pull/11

    Full Changelog: https://github.com/fiberplane/autometrics-rs/commits/v0.1.1
