Statistics-driven benchmarking library for Rust

Overview

Criterion.rs

Statistics-driven Microbenchmarking in Rust

Criterion.rs helps you write fast code by detecting and measuring performance improvements or regressions, even small ones, quickly and accurately. You can optimize with confidence, knowing how each change affects the performance of your code.


Features

  • Statistics: Statistical analysis detects if, and by how much, performance has changed since the last benchmark run
  • Charts: Uses gnuplot to generate detailed graphs of benchmark results
  • Stable-compatible: Benchmark your code without installing nightly Rust
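
Benchmark runs can also be compared against a named baseline from the command line; a quick sketch of that workflow (the baseline name "before" is only an illustration):

cargo bench -- --save-baseline before
# ... make a change ...
cargo bench -- --baseline before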

Quickstart

In order to generate plots, you must have gnuplot installed. See the gnuplot website for installation instructions. See Compatibility Policy for details on the minimum supported Rust version.

To start with Criterion.rs, add the following to your Cargo.toml file:

[dev-dependencies]
criterion = "0.3"

[[bench]]
name = "my_benchmark"
harness = false

Next, define a benchmark by creating a file at $PROJECT/benches/my_benchmark.rs with the following contents:

use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n-1) + fibonacci(n-2),
    }
}

fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);

Finally, run this benchmark with cargo bench. You should see output similar to the following:

     Running target/release/deps/example-423eedc43b2b3a93
fib 20                  time:   [26.029 us 26.251 us 26.505 us]
Found 11 outliers among 99 measurements (11.11%)
  6 (6.06%) high mild
  5 (5.05%) high severe

See the Getting Started guide for more details.

Goals

The primary goal of Criterion.rs is to provide a powerful and statistically rigorous tool for measuring the performance of code, preventing performance regressions and accurately measuring optimizations. Additionally, it should be as programmer-friendly as possible and make it easy to create reliable, useful benchmarks, even for programmers without an advanced background in statistics.

Contributing

First, thank you for contributing.

One great way to contribute to Criterion.rs is to use it for your own benchmarking needs and report your experiences, file and comment on issues, etc.

Code or documentation improvements in the form of pull requests are also welcome. If you're not sure what to work on, try checking the Beginner label.

If your issues or pull requests have no response after a few days, feel free to ping me (@bheisler).

For more details, see the CONTRIBUTING.md file.

Compatibility Policy

Criterion.rs supports the last three stable minor releases of Rust. At time of writing, this means Rust 1.40 or later. Older versions may work, but are not tested or guaranteed.

Currently, the oldest version of Rust believed to work is 1.39. Future versions of Criterion.rs may break support for such old versions, and this will not be considered a breaking change. If you require Criterion.rs to work on old versions of Rust, you will need to stick to a specific patch version of Criterion.rs.

Maintenance

Criterion.rs was originally created by Jorge Aparicio (@japaric) and is currently being maintained by Brook Heisler (@bheisler).

License

Criterion.rs is dual licensed under the Apache 2.0 license and the MIT license.

Related Projects

  • bencher - A port of the libtest benchmark runner to stable Rust
  • criterion - The Haskell microbenchmarking library that inspired Criterion.rs
  • cargo-benchcmp - Cargo subcommand to compare the output of two libtest or bencher benchmark runs
  • cargo-flamegraph - Cargo subcommand to profile an executable and produce a flamegraph


Comments
  • async await support

    #363 is closed, but I think it's important to re-open it because it seems necessary to reuse an async runtime instead of always calling async_std::task::block_on in b.iter.
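
    For reference, newer Criterion.rs releases expose an async API that reuses a caller-provided runtime; a minimal sketch, assuming the async_tokio cargo feature and tokio as a dev-dependency:

    use criterion::{criterion_group, criterion_main, Criterion};
    use tokio::runtime::Runtime;

    async fn do_work() -> u64 {
        // stand-in for real async work
        42
    }

    fn criterion_benchmark(c: &mut Criterion) {
        // Build the runtime once and reuse it for every iteration,
        // instead of calling block_on inside b.iter.
        let rt = Runtime::new().unwrap();
        c.bench_function("do_work", |b| b.to_async(&rt).iter(|| do_work()));
    }

    criterion_group!(benches, criterion_benchmark);
    criterion_main!(benches);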

    Enhancement Intermediate 
    opened by GopherJ 18
  • restarting development and criterion's future

    I'd like to get criterion back into a usable state, and I'd like to improve the user experience around it.

    Here's my plan so far; input is appreciated and PRs are also welcome.

    Immediate actions: criterion 0.1.0

    Basically, release criterion with its current public API and refactor its internals; further improvements shouldn't break the API.

    • [x] land new timing loops #26
    • [x] make compilable on current nightly
    • [x] refactor: use the new Instant/Duration APIs (when stable?)
    • [x] fix travis
    • [x] CI: test on mac. see #70
    • [x] CI: test on windows. see #70
    • [x] documentation: explain output/plots, add examples, how to use in a cargo project
    • [x] fix AppVeyor
    • [x] release dependencies
    • [x] start making releases.

    Short term: criterion 0.1.x

    • [x] refactor: reduce usage of unstable features.
      • Throw away simd or put it behind a cargo feature; it's unstable and only works on x86_64
      • criterion won't work on stable anytime soon because it depends on test::black_box
    • [x] colorize output messages
    • [x] developer documentation: explain the math
    • [x] refactor: proper error handling in internals. bubble up errors and report them instead of panicking/unwrapping
    • [x] clippy
    • [x] rustfmt

    Future: criterion 0.2

    If breaking changes are allowed, how can we improve the user experience? Some ideas:

    • [x] simplify the public API
      • #26 adds several methods, perhaps all the similarly named methods can be replaced with a function that takes enums or some builder pattern.
    • [ ] expose errors to user instead of panicking/exiting
    • [ ] expose more internals, e.g. return an intermediate struct with the benchmark results instead of directly writing the results to disk.
    • [ ] move away from gnuplot, produce a web report with interactive plots
    • [x] better integration with cargo, output files to target directory, plots could live in target/doc

    If you used criterion in the past, I'd like to hear about your experience.

    • In general, what worked well, and what didn't work for you?
    • How helpful, clear, or confusing were the generated plots and the output messages?
    • Was the performance regression detection reliable, or did you get false positives?
    • What functionality do you think is missing?
    • Any suggestions to improve the user experience?
    opened by japaric 17
  • Add example of calling source code from `src/`

    The example puts all the code in benches/. In practical use, you will want to call functions from src/. It is non-trivial to call these functions (a simple use won't work).

    The quickstart example should show:

    • How to prefix the use statements (crate::, project_name::, etc.)
    • What the visibility of items needs to be (pub or pub(crate), etc.).
    • How to make it work when the project only has a main.rs.

    That is, start with src/main.rs like so:

    fn fibonacci(n: u64) -> u64 {
        match n {
            0 => 1,
            1 => 1,
            n => fibonacci(n-1) + fibonacci(n-2),
        }
    }
    
    pub fn main() {
        println!("The 1000th fibnonacci is {}", fibonacci(1000));
    }
    

    and explain what needs to be done to benchmark that fibonacci function with Criterion (without moving it).

    I always struggle to get this working when I want to use Criterion in a new project. It would help me if the quickstart example covers this.
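
    For readers looking for the usual pattern: bench targets link against the package's library crate, so the function needs to live in (or be re-exported from) src/lib.rs and be pub. A minimal sketch, assuming the package is named my_project:

    // src/lib.rs
    pub fn fibonacci(n: u64) -> u64 {
        match n {
            0 => 1,
            1 => 1,
            n => fibonacci(n - 1) + fibonacci(n - 2),
        }
    }

    // benches/my_benchmark.rs
    use criterion::{black_box, criterion_group, criterion_main, Criterion};
    use my_project::fibonacci; // bench targets can use the package's library crate

    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
    }

    criterion_group!(benches, criterion_benchmark);
    criterion_main!(benches);

    If the project only has a main.rs, the function has to be moved into a lib.rs (which main.rs can also use) for the bench target to reach it.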

    opened by recmo 16
  • error: Unrecognized option: 'save-baseline'

    Hi guys, thanks for this great library!

    This seems very simple, so it may be my mistake. I am running the following benchmark from /benches/benchmarks.rs:

    use criterion::{Benchmark, Criterion};
    use std::time::Duration;

    fn main() {
        // make Criterion listen to command line arguments
        let mut c = Criterion::default().configure_from_args();

        c.bench(
            "name",
            Benchmark::new("dosomething", move |b| {
                // set up things
                b.iter(|| {
                    // do thing
                })
            })
            // Limit the measurement time and the sample size
            // to make sure the benchmark finishes in a feasible amount of time.
            .measurement_time(Duration::new(100, 0))
            .sample_size(20),
        );
    }
    

    Running cargo bench works, and faithfully runs the above benchmark.

    I want to use the --save-baseline command line argument, however, and provide it the way the docs suggest:

    cargo bench -- --save-baseline master
    

    This produces the following error:

    Finished release [optimized] target(s) in 0.13s
    Running target/release/deps/abclib-60df177b6b7c338b
    error: Unrecognized option: 'save-baseline'
    error: bench failed
    

    Initiating the benchmark using the criterion_main!(*) macros instead does not change this.

    What is going wrong?

    I am running all this on stable. Versions:

    cargo 1.28.0 (96a2c7d16 2018-07-13)
    stable-x86_64-apple-darwin
    rustc 1.28.0 (9634041f0 2018-07-30)
    
    documentation 
    opened by jjhbw 15
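
    For readers hitting the same message: a common cause is that cargo bench also runs the default libtest harness for the library (and any other) targets, and libtest does not recognize Criterion.rs's flags. A sketch of the usual Cargo.toml adjustment (the bench target name is assumed):

    [lib]
    bench = false   # keep libtest from receiving Criterion's CLI flags

    [[bench]]
    name = "benchmarks"
    harness = false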
  • Add reporter format with bencher support

    This adds an option to print bencher-style output on the CLI:

    $ cargo bench -- -R bencher
       Compiling criterion v0.3.1 (/Users/pksunkara/Coding/clap-rs/criterion.rs)
       Compiling clap v3.0.0-beta.1 (/Users/pksunkara/Coding/clap-rs/clap)
        Finished bench [optimized] target(s) in 1m 15s
         Running target/release/deps/01_default-439bccc0e739a77b
    Gnuplot not found, using plotters backend
    test build_empty_app ... bench:        135 ns/iter (+/- 0)                                 
    test parse_empty_app ... bench:      1,940 ns/iter (+/- 10)  
    
    opened by pksunkara 14
  • Panic on unwrap of None

    When running the latest version from git, I get the following panic once all the benchmarks have run:

    thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/libcore/option.rs:355:21
    stack backtrace:
       0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
                 at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
       1: std::panicking::default_hook::{{closure}}
                 at src/libstd/sys_common/backtrace.rs:71
                 at src/libstd/sys_common/backtrace.rs:59
                 at src/libstd/panicking.rs:211
       2: std::panicking::rust_panic_with_hook
                 at src/libstd/panicking.rs:227
                 at src/libstd/panicking.rs:491
       3: std::panicking::continue_panic_fmt
                 at src/libstd/panicking.rs:398
       4: rust_begin_unwind
                 at src/libstd/panicking.rs:325
       5: core::panicking::panic_fmt
                 at src/libcore/panicking.rs:95
       6: core::panicking::panic
                 at src/libcore/panicking.rs:59
       7: <core::iter::Map<I, F> as core::iter::iterator::Iterator>::next
       8: <criterion::html::Html as criterion::report::Report>::final_summary
       9: <criterion::report::Reports as criterion::report::Report>::final_summary
    

    This re-occurs each time, even if I do a cargo clean.

    opened by tkaitchuck 14
  • Is there a way to run a specific benchmark instead of the whole suite?

    I might just have missed the how-to in the documentation but trying something like:

    rustup run nightly cargo bench benches::my_benchmark::benchmark_1 does not seem to filter out benchmarks.

    If the feature does not yet exist, it would be incredibly useful, since optimization often focuses on a single function at a time.
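
    For reference, Criterion.rs filters on the benchmark name passed after --, not on the Rust module path. A quick sketch, using the quickstart's benchmark name:

    cargo bench -- "fib 20"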

    opened by Proksima 14
  • New iter function and benchmarking over multiple functions

    I hope I remembered to add everything I needed here.

    I added both an iter function and the possibility to benchmark multiple functions. Do you think both should be included?

    Example of where I use this: https://github.com/faern/forkjoin-benchmarking/blob/master/src/main.rs#L108

    @japaric

    opened by faern 14
  • Version 0.4.0

    Changes:

    • The Criterion::can_plot function has been removed.
    • The Criterion::bench_function_over_inputs function has been removed.
    • The Criterion::bench_functions function has been removed.
    • The Criterion::bench function has been removed.
    • The csv_output cargo feature is now required for CSV output.
    • The html_reports cargo feature is now required for HTML output.
    • Added a --discard-baseline flag.
    • rayon and plotters are now optional dependencies (enabled by default).
    • Status messages ('warming up', 'analyzing', etc) are printed to stderr, benchmark results are printed to stdout.
    • Terminal escape codes are handled by the anes crate.
    • Accept subsecond durations as command-line options.
    • Formally support WASI (and automatically test for wasi regressions).
    • Add --quiet flag (for printing a single line per benchmark).
    • Replace serde_cbor with ciborium.
    • Upgrade to clap v3.
    • Bump regex to version 1.5.
    • Bump MSRV to 1.56.1.
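
    With csv_output and html_reports now behind cargo features, enabling them looks roughly like this in Cargo.toml (a sketch):

    [dev-dependencies]
    criterion = { version = "0.4", features = ["html_reports", "csv_output"] }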
    opened by lemmih 13
  • Crash with valgrind

    I've tried analyzing my benchmark with valgrind, and I've experienced a crash. Seems like criterion was the culprit (to be precise, running the benchmark with the standard harness made the crash go away), so I wanted to bring this to your attention. Seems to me that analyzing a benchmark with profiling tools is something people generally would want to do.

    Or maybe I've simply made a mistake? Anyway, details can be found here; if there's anything I should do to investigate or help fix it, please let me know :)

    Bug Investigate Need Information 
    opened by KillTheMule 13
  • panic 'index out of bounds'

    I'm getting a panic which is pointing to this line: https://github.com/japaric/criterion.rs/blob/master/src/plot/mod.rs#L440

    I assume it's because somehow I've got a lot of zeroes coming from somewhere that's not supposed to be zero, but I thought I'd let you know.

         Running C:\Users\DavidHewson\Documents\Repositories\rust\la05\target\release\deps\criterion_benchmark-40c30aef1aa85bcb.exe
    invert                  time:   [0.0000 ps 0.0000 ps 0.0000 ps]
                            change: [   NaN%    NaN%    NaN%] (p = 0.00 < 0.05)
                            Change within noise threshold.
    thread 'main' panicked at 'index out of bounds: the len is 500 but the index is 18446744073709551615', C:\Users\David\.cargo\registry\src\github.com-1ecc6299db9ec823\criterion-0.2.2\src\plot\mod.rs:440:17
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    error: bench failed
    
    opened by dten 13
  • No way to create a `BenchmarkId` with a function but without a parameter

    The problem:

    Currently, new exists to create a BenchmarkId from a function name and parameter, and from_parameter creates one from just a parameter. IntoBenchmarkId lets strings become a BenchmarkId with just a function name, yet it isn't publicly exported.

    Why is this needed:

    Some methods, such as bench_with_input on Criterion, take a BenchmarkId instead of any IntoBenchmarkId. This could be changed to take a generic, but I think even if that happens it makes sense to add this functionality anyway.

    Desired solution:

    A from_function or similar function to create a BenchmarkId with just a function name
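
    For context, a sketch of the two constructors that do exist today:

    use criterion::BenchmarkId;

    fn main() {
        // function name plus parameter
        let with_param = BenchmarkId::new("fibonacci", 20);
        // parameter only; the function name comes from the benchmark group
        let param_only = BenchmarkId::from_parameter(20);
        let _ = (with_param, param_only);
    }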

    opened by CraftSpider 1
  • Profile: fixed number of iterations

    The --profile-time option of criterion is very convenient for profiling, but in some cases you don't want to automatically scale the number of iterations, such as when you compare profiles of different benchmark executions.

    In such a case, having the same number of iterations for all benchmarks is very convenient, as it makes it easy to compare profiling metrics such as instruction count (and other performance counters).

    I think that this could be achieved by simply adding a command-line parameter --profile-iter that would behave identically to --profile-time, except that it would set a fixed number of benchmark iterations instead of fixed time.

    Besides, it would also be helpful if --profile-time could print the number of iterations it ran.
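
    For reference, the existing flag takes a duration in seconds; a usage sketch:

    cargo bench -- --profile-time 60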

    opened by cassiersg 0
  • HTML report documentation

    I was trying to benchmark some of my code and could not get the HTML report generated. I wasn't sure why until I checked a previous project I worked on, which specified the html_reports feature.

    I feel like mentioning this here would probably help any newcomers to the project. I could make a small PR for this; just let me know what you think!

    I would add something like

    "Criterion.rs can generate an HTML report displaying the results of the benchmark under target/criterion/reports/index.html. Note that for this to happen you need to include the html_reports like so criterion = { version = "0.4.0", features = [ "html_reports" ]}.`"

    opened by AntoniosBarotsis 2
  • Specifying bench targets with `cargo criterion --bench` doesn't work

    Issue

    I'm unable to run specific groups of benches with the --bench option.

    Example

    mod day01 and mod day02 contain similarly defined criterion_groups.

    criterion_group!(benches, bench_part_1, bench_part_2,);
    

    Both groups are mentioned in the criterion_main

    criterion_main!(
        day01::benches,
        day02::benches,
    )
    

    Expectation: mentioning either criterion_group or module should run that bench alone.

    Actual:

    ❯ cargo criterion --bench bench::day01
    error: no bench target named `bench::day01`.
    Available bench targets:
        bench
        day01
        day02
    
    ❯ cargo criterion --bench day01
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
    

    cargo criterion correctly runs all benches. Is there a different way to specify that only certain benches should be run?

    opened by nindalf 0
  • Update docs & examples to use builtin `bench_black_box` (1.66 stable)

    std::hint::black_box (currently gated behind the bench_black_box feature) will be stable in 1.66 (2022-12-15 stable date), and I believe it may supersede the need for criterion::black_box in some ways. It may be worth updating the docs to indicate this, and possible tradeoffs if any.

    Docs: https://doc.rust-lang.org/std/hint/fn.black_box.html
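
    A sketch of what the updated example might look like on Rust 1.66+ (the benchmarked function here is just an illustration):

    use std::hint::black_box;
    use criterion::{criterion_group, criterion_main, Criterion};

    fn sum_to(n: u64) -> u64 {
        (0..=n).sum()
    }

    fn criterion_benchmark(c: &mut Criterion) {
        c.bench_function("sum_to 1000", |b| b.iter(|| sum_to(black_box(1000))));
    }

    criterion_group!(benches, criterion_benchmark);
    criterion_main!(benches);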

    opened by tgross35 1
  • How to do a parameterized & expensive setup before benchmarking?

    I'm trying to benchmark some database code and found that setting up the database is pretty complicated.

    I need to create a new database on disk to run the benchmark, but can't see how to do it once per benchmark without doing it inside it.

    The problem is that in the setup sections there are no parameters to see which run it is, so I can't use a pool or similar to split the db creation.

    If I set up the DB outside the bench, it gets "copied" and all the benches run against the same DB on disk.

        #[derive(Debug)]
        struct Data {
            a: i32,
            b: u64,
            c: String,
        }
    
        impl Data {
            pub fn new(a: i32) -> Self {
                let b = (a + 13153) as u64;
                Self { a, b, c: b.to_string() }
            }
        }
    
        #[derive(Copy, Clone)]
        enum Runs {
            Tiny = 100,
        }
    
        impl Runs {
            pub fn range(self) -> Range<u16> {
                let x = self as u16;
                0..x
            }
    
            pub fn data(self) -> impl Iterator<Item = Data> {
                let x = self as u16;
                (0..x).into_iter().map(|x| Data::new(x as i32))
            }
        }
    
        mod bench_sqlite {
            use super::*;
            use rusqlite::{Connection, Transaction};
    
            fn build_db() -> ResultTest<Connection> {
                let tmp_dir = TempDir::new("sqlite_test")?;
                let db = Connection::open(tmp_dir.path().join("test.db"))?;
                db.execute_batch(
                    "PRAGMA journal_mode = WAL;
                    PRAGMA synchronous = normal;",
                )?;
    
                db.execute_batch(
                    "CREATE TABLE data (
                    a INTEGER PRIMARY KEY,
                    b BIGINT NOT NULL,
                    c TEXT);",
                )?;
    
                Ok(db)
            }
    
            pub(crate) fn insert_tx_per_row(run: Runs) -> ResultTest<()> {
                let db = build_db()?; // <-- HOW AVOID THIS?
                for row in run.data() {
                    db.execute(
                        &format!("INSERT INTO data VALUES({} ,{}, '{}');", row.a, row.b, row.c),
                        (),
                    )?;
                }
                Ok(())
            }
        }
    
        fn bench_insert_tx_per_row(c: &mut Criterion) {
            let mut group = c.benchmark_group("insert row");
            let run = Runs::Tiny;
            group.throughput(Throughput::Elements(run as u64));
    
            group.bench_function(BenchmarkId::new(SQLITE, 1), |b| {
                b.iter(|| bench_sqlite::insert_tx_per_row(run))
            });
            group.bench_function(BenchmarkId::new(PG, 1), |b| {
                b.iter(|| bench_pg::insert_tx_per_row(run))
            });
    
            group.finish();
        }
    
        criterion_group!(benches, bench_insert_tx_per_row);
        criterion_main!(benches);
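
    One pattern that may help here (a sketch independent of the code above): Bencher::iter_batched runs a setup closure outside the measured time, so an expensive, parameterized setup can be rebuilt per batch without being timed:

    use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

    fn expensive_setup() -> Vec<u64> {
        // stand-in for creating a fresh database on disk
        (0..1_000).collect()
    }

    fn routine(data: Vec<u64>) -> u64 {
        data.iter().sum()
    }

    fn bench_with_setup(c: &mut Criterion) {
        c.bench_function("sum fresh data", |b| {
            // setup is re-run for every batch and excluded from the measurement
            b.iter_batched(expensive_setup, routine, BatchSize::LargeInput)
        });
    }

    criterion_group!(benches, bench_with_setup);
    criterion_main!(benches);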
    opened by mamcx 0