An asynchronous dumb exporter proxy for Prometheus. It aggregates all the metrics and exposes them as a single scrape endpoint.

Overview

A dumb, lightweight, asynchronous exporter proxy

This is a dumb, lightweight, asynchronous exporter proxy that helps expose multiple applications' metrics over a single port. This is useful when firewall restrictions make it difficult to open multiple ports.

The exporter proxy is capable of both receiving pushed metrics and polling external scrape endpoints.

The exporter proxy receives data either as capnp or as JSON; it will not accept both at the same time. Check the receiver.format section in the config.toml to choose one.
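As a rough sketch, the format choice might look like the following in config.toml. Only the receiver.format key is taken from this document; the section layout and any other keys shown are assumptions for illustration, so check the shipped config.toml for the actual structure.

```toml
# Hypothetical sketch -- only receiver.format is confirmed by this README.
[receiver]
# one of "capnp" or "json"; the proxy accepts only one format at a time
format = "json"
```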

If multiple applications send metrics to the exporter proxy, it is each application's responsibility to add a prefix to its metric names to prevent collisions. If the same metric name occurs in metrics from two different applications, the exporter proxy will expose both occurrences.

Both the TCP and the Unix socket receiver endpoints expect the data in a key-value form. The capnp schema is also a basic key-value structure where the key is the name of the application sending the metrics and the value is a string representation of the metrics. If the format is json, the value can be either a single multiline string containing the entire metrics, or a JSON array of strings. This is simply how the exporter proxy distinguishes between the metrics sent by different applications. Upon receiving a new set of metrics, the previously exposed set is overwritten.
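The two JSON shapes can be sketched as follows. This assumes that, like the capnp schema, the JSON payload maps the application name to its metrics; the app name and metric lines are illustrative only.

```python
import json

# Example metric lines in Prometheus exposition format (taken from the
# example below; timestamps and values are illustrative).
metrics_lines = [
    'http_requests_total_foo_bar_app1{method="post",code="200"} 1027 1395066363000',
    'http_requests_total_foo_bar_app1{method="post",code="400"} 3 1395066363000',
]

# Variant 1: the entire metrics block as a single multiline string.
single_string_payload = json.dumps({"foo_bar_app1_metrics": "\n".join(metrics_lines)})

# Variant 2: the metrics block as a JSON array of strings, one metric per entry.
array_payload = json.dumps({"foo_bar_app1_metrics": metrics_lines})
```

Either payload would then be written to the TCP or Unix socket receiver endpoint.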

E.g., if an application first sends the following metrics

"http_requests_total_foo_bar_app1{method=\"post\",code=\"200\"} 1027 1395066363000",
"http_requests_total_foo_bar_app1{method=\"post\",code=\"400\"}    3 1395066363000"

and then sends only

"http_requests_total_foo_bar_app1{method=\"post\",code=\"200\"} 1029 1395066363000",

the exporter proxy will only have

"http_requests_total_foo_bar_app1{method=\"post\",code=\"200\"} 1029 1395066363000",

and will no longer remember "http_requests_total_foo_bar_app1{method=\"post\",code=\"400\"} 3 1395066363000"
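The overwrite semantics above can be sketched as a per-app replacement, not a merge. This is a minimal illustration of the behavior described, not the proxy's actual implementation.

```python
# Minimal sketch: the proxy keeps one metrics blob per app name, and each
# new push from an app replaces that app's previous blob entirely.
store = {}

def receive(app_name, metrics_text):
    """Replace (not merge) the stored metrics for this app."""
    store[app_name] = metrics_text

# First push: two metric lines.
receive("foo_bar_app1_metrics",
        'http_requests_total_foo_bar_app1{method="post",code="200"} 1027 1395066363000\n'
        'http_requests_total_foo_bar_app1{method="post",code="400"} 3 1395066363000')

# Second push: only one line, so the code="400" line is forgotten.
receive("foo_bar_app1_metrics",
        'http_requests_total_foo_bar_app1{method="post",code="200"} 1029 1395066363000')
```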

Endpoints

  • a scrape endpoint /metrics that exposes all the metrics combined. As mentioned earlier, this is a dumb proxy and will not validate the Prometheus exposition format. It is the application's responsibility to validate the metrics before sending them. Most Prometheus client libraries support formatting metrics in a valid exposition format.
  • a metadata endpoint /apps that exposes the list of apps that send metrics, the source from which the metrics were received, and the last received time in UTC
curl localhost:6555/apps
+---------------------------------------------------------------------------------------------------------------+
| Current time: 2022-05-22 23:51:52.939898700 UTC                                                               |
+---------------------------------------------------------------------------------------------------------------+
| +-------------------------------+-----------------------------------+---------------------------------------+ |
| | App Name                      | Last Update Time (UTC)            | Source                                | |
| +-------------------------------+-----------------------------------+---------------------------------------+ |
| | external_poll: 127.0.0.1:1027 | 2022-05-22 23:51:47.971181300 UTC | external_poll: 127.0.0.1:1027         | |
| +-------------------------------+-----------------------------------+---------------------------------------+ |
| | external_poll: 127.0.0.1:1025 | 2022-05-22 23:51:47.970590700 UTC | external_poll: 127.0.0.1:1025         | |
| +-------------------------------+-----------------------------------+---------------------------------------+ |
| | foo_bar_app1_metrics          | 2022-05-22 23:51:07.406709200 UTC | /var/run/exporter-proxy-receiver.sock | |
| +-------------------------------+-----------------------------------+---------------------------------------+ |
| | external_poll: 127.0.0.1:1026 | 2022-05-22 23:51:47.970724 UTC    | external_poll: 127.0.0.1:1026         | |
| +-------------------------------+-----------------------------------+---------------------------------------+ |
| | foo_bar_app0_metrics          | 2022-05-22 23:51:42.737333400 UTC | 127.0.0.1:6554                        | |
| +-------------------------------+-----------------------------------+---------------------------------------+ |
+---------------------------------------------------------------------------------------------------------------+

Test sending metrics

These tests assume the default configuration as given in the config.toml; if it is modified, change the commands accordingly. A couple of JSON files and a Python script are included in the python_test_clients directory. json_string.json has the metric data as a single string, whereas json_array.json has it as an array of strings. These are the only two schemas that are supported; metrics in any other JSON format will not be deserialized and proxied.

Run the exporter proxy with

RUST_LOG="trace" cargo run

and cd into python_test_clients from another shell.

To proxy the capnp serialized test metrics, first make sure that the receiver.format is capnp and not json. Restart the exporter proxy if it was already running.

  • install the capnp library
pip install -r requirements.txt
  • run the script test_send_tcp_socket_capnp.py, then curl localhost:6555/metrics to see that it exposes the metrics in the Prometheus exposition format. Run curl localhost:6555/apps to see the app names that sent the metrics.

To proxy the JSON serialized message, make sure that the receiver.format is json and not capnp.

  • install netcat-openbsd

  • run the test with either of the json format as

# to send it over the unix socket
cat ./json_array.json  | nc -U /var/run/exporter-proxy-receiver.sock -q 0

or

# to send it over tcp socket
cat ./json_string.json | nc -s 127.0.0.1 127.0.0.1 6554 -q 0

To test the external poll, cd to python_test_clients directory and run

python3 -m http.server 1025&
python3 -m http.server 1026&
python3 -m http.server 1027&

to run 3 instances of web servers on ports 1025, 1026 and 1027 respectively.

After this, the metrics can be checked by cURLing the endpoints listed in the config.toml's poll_external.other_scrape_endpoints section, like

curl localhost:1026/1026.html
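The same check can be sketched in Python. This assumes a test server from the previous step is serving a 1026.html file containing metrics; the file name is inferred from the curl example above.

```python
import urllib.request

def fetch_metrics(url: str = "http://localhost:1026/1026.html") -> str:
    """Fetch one of the externally polled endpoints, as curl does above."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")
```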