A micro-benchmark framework to use with cargo bench

Overview


Glassbench is a micro-benchmark library with memory, to use with cargo bench.

Why

Run benchmarks and get a comparison with the previous execution

cargo bench

You get compact tables with the mean durations of all the tasks you defined:

(screenshot: results table)

(in the future, iterations and total durations will be shown only in verbose mode)

Record every test, with tags to help you compare strategies

Read the whole history of your project's benchmarks, as everything is stored in SQLite.

cargo bench -- -- --history 2

(screenshot: history table)

When trying optimization strategies, or just anything which you suspect may impact performance, you may tag a bench execution with --tag "your tag". Those tags are visible in the history.

Graph your various tests, filtering on tags

cargo bench -- my_task -- --graph 2

The viewer, with tabular and graphical views, opens in your browser:

(screenshot: graph view)

The graph can be zoomed with the mouse wheel and moved with the mouse.

You can read the precise data in a table too:

(screenshot: data table)

Everything is embedded in a standalone HTML page; there's no server process running. The page can even be sent or hosted.

Read or Edit benchmark history with SQL

Using the sqlite3 command line shell, you can run your own SQL queries:

(screenshot: SQL session)

Usage

The complete testable example is in /examples/lettersorter.

Add the dev-dependency

[dev-dependencies]
glassbench = "0.2"

Prepare the benchmark

Your bench file, located in /benches, must have a task-defining function and a glassbench! call.

Here we define one group, with two tasks, stressing the lettersorter::sort function with two kinds of inputs.

use {
    lettersorter::sort,
    glassbench::*,
};

static SMALL_NUMBERS: &[&str] = &[
    "0.123456789",
    "42",
    "-6",
    "π/2",
    "e²",
];

static BIG_NUMBERS: &[&str] = &[
    "424568",
    "45865452*44574*778141*78999",
    "same but even bigger",
    "42!",
    "infinite",
];

fn bench_number_sorting(bench: &mut Bench) {
    bench.task("small numbers", |task| {
        task.iter(|| {
            for n in SMALL_NUMBERS {
                pretend_used(sort(n));
            }
        });
    });
    bench.task("big numbers", |task| {
        task.iter(|| {
            for n in BIG_NUMBERS {
                pretend_used(sort(n));
            }
        });
    });
}

glassbench!(
    "Number Sorting",
    bench_number_sorting,
    // you can pass other task defining functions here
);

The callback you give to task.iter will be executed many times after an initial warm-up.

If you have some preparation to do, do it before calling task.iter.

To prevent your computations from being optimized away by the compiler, pass the values you build to pretend_used.
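pretend_used plays the same role as the standard library's std::hint::black_box: it makes the compiler believe a value is observed, so the work producing it can't be deleted as dead code. A self-contained sketch of that idea, using plain std rather than Glassbench's API:

```rust
use std::hint::black_box;

fn sum_of_squares(data: &[u64]) -> u64 {
    data.iter().map(|&x| x * x).sum()
}

fn main() {
    // Preparation happens once, outside the measured closure.
    let data: Vec<u64> = (1..=100).collect();

    // Inside the closure given to task.iter, the result must look
    // observed, or the optimizer may remove the whole computation.
    let result = sum_of_squares(black_box(&data));
    black_box(result); // same intent as pretend_used(result)
}
```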

The bench must be defined in Cargo.toml with harness = false, so that Glassbench, not the default test harness, provides the main function and can parse its own arguments:

[[bench]]
name = "sort_numbers"
harness = false

Bench command overview

The command has the following form:

cargo bench -- <optional list of benches to run> -- <glassbench arguments>

The names of the benches are the names of the bench files (see examples below).

The glassbench arguments let you display the history or graph the records for a specific task, specify a tag, etc.

Run all benchmarks

cargo bench

This will run all benchmark groups (even the ones not using Glassbench) and will produce something like this after a few tries and some optimization efforts:

(screenshot: sort benchmark results table)

Be careful not to otherwise stress your system if you want executions to be comparable: it's better to close everything else until the benchmark is done.

Run just one benchmark

Specify the id of the benchmark (taken from the bench file name) after the first --:

cargo bench -- sort_numbers

(as our example only has one benchmark, this filter is redundant here)

You could specify several benchmarks like this:

cargo bench -- sort_numbers sort_colors sort_flowers

Benchmark with a tag

Let's assume we're testing under notable conditions, maybe a genius new strategy, and we want this information recorded in the history. We can do

cargo bench -- sort_numbers -- --tag "deep learning"

Look at the history of a specific task

You refer to a task by its number in the table:

cargo bench -- sort_numbers -- --history 1

Graph a task over executions

Additional arguments are given after a second --. To graph a task, refer to it by its number in the table:

cargo bench -- sort_numbers -- --graph 1

This opens in your browser a graph of the durations in nanoseconds of the first (1) task of the "sort_numbers" bench.

(note: the graph is a work in progress and should be improved in the future)

Other arguments

--no-save just runs the benchmark and compares it with the previously saved execution, but doesn't save the result:

cargo bench -- -- --no-save

Read (or rewrite) the history with SQLite

History is saved in the local glassbench_v1.db SQLite file.

You should add its path to your VCS ignore list, as measurements can't be compared from one system to another.

To enter an interactive SQL session, do

sqlite3 glassbench_v1.db

Besides SQL queries, you might find .schema useful (it shows you the tables), as well as .quit.
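As an illustration of the kind of query you can run — the table and column names below are invented for the demonstration (it builds a throwaway in-memory database rather than touching your real file); run .schema against your own glassbench_v1.db to get the actual ones:

```shell
# Throwaway in-memory demo: "record", "task" and "mean_ns" are
# hypothetical names, not Glassbench's real schema.
sqlite3 ":memory:" "
  CREATE TABLE record (task TEXT, mean_ns INTEGER);
  INSERT INTO record VALUES ('small numbers', 1250), ('big numbers', 3800);
  SELECT task, mean_ns FROM record ORDER BY mean_ns DESC LIMIT 1;
"
```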

Limits

Glassbench measures the wall-clock time actually taken by your functions. That's the time which matters to your users, but it's extremely sensitive to your system's load and can't be compared from one computer to another.

You must be cautious when looking at the history: changes may reflect more than your code's efficiency. Even if you didn't change the task, your system's load or efficiency may have changed.

Alternatives

Criterion is very similar. It produces detailed reports and has more options than Glassbench, but keeps no history past the previous cargo bench (which is usually the one you most want). Glassbench tries to offer a more compact, easier-to-read display and encourages you to define as many tasks as you have performance-impacting kinds of input.

License

MIT
