Iai

Experimental One-shot Benchmark Framework in Rust

Iai is an experimental benchmarking harness that uses Cachegrind to perform extremely precise single-shot measurements of Rust code.

Features

  • Precision: High-precision measurements allow you to reliably detect very small optimizations to your code
  • Consistency: Iai can take accurate measurements even in virtualized CI environments
  • Performance: Since Iai only executes a benchmark once, it is typically faster to run than statistical benchmarks
  • Profiling: Iai generates a Cachegrind profile of your code while benchmarking, so you can use Cachegrind-compatible tools to analyze the results in detail
  • Stable-compatible: Benchmark your code without installing nightly Rust

Quickstart

In order to use Iai, you must have Valgrind installed; this means that Iai cannot be used on platforms that Valgrind does not support. You can verify your installation with valgrind --version.

To start with Iai, add the following to your Cargo.toml file:

[dev-dependencies]
iai = "0.1"

[[bench]]
name = "my_benchmark"
harness = false

Next, define a benchmark by creating a file at $PROJECT/benches/my_benchmark.rs with the following contents:

use iai::black_box;

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n-1) + fibonacci(n-2),
    }
}
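
// black_box prevents the compiler from constant-folding the call away,
// so the work being measured is actually performed at run time.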

fn iai_benchmark_short() -> u64 {
    fibonacci(black_box(10))
}

fn iai_benchmark_long() -> u64 {
    fibonacci(black_box(30))
}

iai::main!(iai_benchmark_short, iai_benchmark_long);

Finally, run this benchmark with cargo bench. You should see output similar to the following:

     Running target/release/deps/my_benchmark-8b173c29ce041afa

iai_benchmark_short
  Instructions:                1735
  L1 Accesses:                 2364
  L2 Accesses:                    1
  RAM Accesses:                   1
  Estimated Cycles:            2404

iai_benchmark_long
  Instructions:            26214735
  L1 Accesses:             35638623
  L2 Accesses:                    2
  RAM Accesses:                   1
  Estimated Cycles:        35638668
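
The Estimated Cycles line is a weighted sum of the counters above. Here is a minimal sketch of the apparent model, with weights inferred from the sample output (an assumption about the exact weights, not a documented guarantee):

fn estimated_cycles(l1_accesses: u64, l2_accesses: u64, ram_accesses: u64) -> u64 {
    // Inferred weights: 2364 + 5 * 1 + 35 * 1 == 2404 (iai_benchmark_short above)
    l1_accesses + 5 * l2_accesses + 35 * ram_accesses
}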

Goals

The primary goal of Iai is to provide a simple and precise tool for reliably detecting very small changes to the performance of code. Additionally, it should be as programmer-friendly as possible and make it easy to create reliable, useful benchmarks.

Comparison with Criterion-rs

I intend Iai to be a complement to Criterion-rs, not a competitor. The two projects measure different things in different ways and have different pros, cons, and limitations, so for most projects the best approach is to use both.

Here's an overview of the important differences:

  • Temporary Con: Right now, Iai is lacking many features of Criterion-rs, including reports and configuration of any kind.
    • The current intent is to add support to Cargo-criterion for configuring and reporting on Iai benchmarks.
  • Pro: Iai can reliably detect much smaller changes in performance than Criterion-rs can.
  • Pro: Iai can work reliably in noisy CI environments or even cloud CI providers like GitHub Actions or Travis-CI, where Criterion-rs cannot.
  • Pro: Iai also generates profile output from the benchmark without further effort.
  • Pro: Although Cachegrind adds considerable runtime overhead, running each benchmark exactly once is still usually faster than Criterion-rs' statistical measurements.
  • Mixed: Because Iai can detect such small changes, it may report performance differences from changes to the order of functions in memory and other compiler details.
  • Con: Iai's measurements merely correlate with wall-clock time (which is usually what you actually care about), where Criterion-rs measures it directly.
  • Con: Iai cannot exclude setup code from the measurements, where Criterion-rs can (see the sketch after this list).
  • Con: Because Cachegrind does not measure system calls, IO time is not accurately measured.
  • Con: Because Iai runs the benchmark exactly once, it cannot measure variation in the performance such as might be caused by OS thread scheduling or hash-table randomization.
  • Limitation: Iai can only be used on platforms supported by Valgrind. Notably, this does not include Windows.
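
To make the setup-code con concrete, here is a minimal sketch with a hypothetical bench_sum benchmark: everything inside the benchmark function is measured, input construction included.

fn bench_sum() -> u64 {
    // Setup is measured too: building this Vec counts toward the benchmark.
    let data: Vec<u64> = (0..1000).collect();
    iai::black_box(data.iter().sum())
}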

For benchmarks that run in CI (especially if you're checking for performance regressions in pull requests on cloud CI) you should use Iai. For benchmarking on Windows or other platforms that Valgrind doesn't support, you should use Criterion-rs. For other cases, I would advise using both. Iai gives more precision and scales better to larger benchmarks, while Criterion-rs allows for excluding setup time and gives you more information about the actual time your code takes and how strongly that is affected by non-determinism like threading or hash-table randomization. If you absolutely need to pick one or the other though, Iai is probably the one to go with.

Contributing

First, thank you for contributing.

One great way to contribute to Iai is to use it for your own benchmarking needs and report your experiences, file and comment on issues, etc.

Code or documentation improvements in the form of pull requests are also welcome. If you're not sure what to work on, try checking the Beginner label.

If your issues or pull requests have no response after a few days, feel free to ping me (@bheisler).

For more details, see the CONTRIBUTING.md file.

Compatibility Policy

Iai supports the last three stable minor releases of Rust. At time of writing, this means Rust 1.48 or later. Older versions may work, but are not tested or guaranteed.

Currently, the oldest version of Rust believed to work is 1.48. Future versions of Iai may break support for versions older than 3-versions-ago, and this will not be considered a breaking change. If you require Iai to work on old versions of Rust, you will need to stick to a specific patch version of Iai.
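
If you do need to pin, an exact version requirement in Cargo.toml looks like this (0.1.1 is just an example version number):

[dev-dependencies]
iai = "=0.1.1"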

Maintenance

Iai was originally written and is maintained by Brook Heisler (@bheisler).

License

Iai is dual licensed under the Apache 2.0 license and the MIT license.

Comments
  • Is this project still in development?

    Hello, just wondering if this project is still active or in development?

    It seemed really promising for repeatable benchmarks in CI so it'd be good to know if it's unlikely to be developed further.

    opened by Alfred-Mountfield 2
  • `failed to allocate a guard page` on FreeBSD

    Whenever I try to run any Rust program with Cachegrind on FreeBSD, it panics with the message thread '<unnamed>' panicked at 'failed to allocate a guard page', library/std/src/sys/unix/thread.rs:364:17. It seems that Cachegrind's tricky memory tricks interfere with Rust's tricky memory tricks. Is there a way at compile-time to disable the guard page allocation? If so, iai should use it.

    opened by asomers 1
  • Fix warning and error in example code.

    warning: unused import: `main`
     --> benches/my_benchmark.rs:1:22
      |
    1 | use iai::{black_box, main};
      |                      ^^^^
      |
      = note: `#[warn(unused_imports)]` on by default
    
    error[E0308]: mismatched types
      --> benches/my_benchmark.rs:15:28
       |
    15 | fn iai_benchmark_long() -> u64 {
       |    ------------------      ^^^ expected `u64`, found `()`
       |    |
       |    implicitly returns `()` as its body has no tail or `return` expression
    16 |     fibonacci(black_box(30));
       |                             - help: consider removing this semicolon
    
    error: aborting due to previous error; 1 warning emitted
    
    For more information about this error, try `rustc --explain E0308`.
    
    opened by weiby3 1
  • [QUESTION] Why are L2 accesses not taken into account in estimation?

    I know the approximation comes from an article used to estimate times in Python code, IIRC empirically derived. What I don't understand is why that formula ignores L2 accesses. I would expect them to produce a bigger hit than L1 accesses, as they are slower. I'm asking because a change of mine produces a big (20%) increase in L2 accesses without significantly changing RAM or L1 accesses: it simply switches some small (two-word) values, passed to and returned from functions, from references to plain value copies. I would expect that to show up as a slowdown (I expect it to be slower than the original program), but instead the speed estimate is more or less unchanged, actually with a tiny negative number.

    I haven't yet compared wall times, though.

    opened by Oppen 0
  • Unexpected Results

    To start, I want to thank @bheisler for all of the effort you've put into criterion and iai!!

    I've been experimenting with iai and really like the notion of "one-shot" measuring for low-level benchmarks. I've played around with it, but sometimes I get unexpected results. This could definitely be an error on my part (that is usually the case), but I've been unable to track it down, and thus I've created this issue.

    Of note, I get very consistent results if I do multiple runs of a single configuration. But sometimes I run into problems when I change something or run a slightly different command, and I can then get results that look wrong to me.

    First off, I use an Arch Linux system for development:

    $ uname -a
    Linux 3900x 6.0.12-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 08 Dec 2022 11:03:38 +0000 x86_64 GNU/Linux
    
    $ inxi -c
    CPU: 12-Core AMD Ryzen 9 3900X (-MT MCP-) speed/min/max: 2200/2200/3800 MHz Kernel: 6.0.12-arch1-1 x86_64 Up: 9h 37m 
    Mem: 6392.5/32019.1 MiB (20.0%) Storage: 465.76 GiB (136.4% used) Procs: 481 Shell: bash 5.1.16 inxi: 3.1.03 
    

    I've created exper-iai with cargo new --lib, which creates a lib with a fn add and a test it_works:

    $ cat -n src/lib.rs
         1  pub fn add(left: usize, right: usize) -> usize {
         2      left + right
         3  }
         4
         5  #[cfg(test)]
         6  mod tests {
         7      use super::*;
         8
         9      #[test]
        10      fn it_works() {
        11          let result = add(2, 2);
        12          assert_eq!(result, 4);
        13      }
        14  }
    

    I then added a simple fn main:

    $ cat -n src/main.rs
         1  use exper_iai::add;
         2
         3  fn main() {
         4      let r = add(3, 3);
         5      assert_eq!(r, 6);
         6      println!("{r}");
         7  }
    

    And the iai benchmark is:

    $ cat -n benches/bench_iai.rs 
         1  use iai::black_box;
         2  use exper_iai::add;
         3
         4  fn bench_iai_add() {
         5      black_box(add(2, 2));
         6  }
         7
         8  iai::main!(bench_iai_add,);
    

    I also created gen_asm.sh so I could see the generated assembler.

    $ cat -n asm/add.txt
         1  .section .text.exper_iai::add,"ax",@progbits
         2          .globl  exper_iai::add
         3          .p2align        4, 0x90
         4          .type   exper_iai::add,@function
         5  exper_iai::add:
         6
         7          .cfi_startproc
         8          lea rax, [rdi + rsi]
         9          ret
        10
        11          .size   exper_iai::add, .Lfunc_end0-exper_iai::add
    
    $ cat -n asm/bench_iai_add.txt 
         1  .section .text.bench_iai::iai_wrappers::bench_iai_add,"ax",@progbits
         2          .p2align        4, 0x90
         3          .type   bench_iai::iai_wrappers::bench_iai_add,@function
         4  bench_iai::iai_wrappers::bench_iai_add:
         5
         6          .cfi_startproc
         7          push rax
         8          .cfi_def_cfa_offset 16
         9
        10          mov edi, 2
        11          mov esi, 2
        12          call qword ptr [rip + exper_iai::add@GOTPCREL]
        13          mov qword ptr [rsp], rax
        14
        15          mov rax, qword ptr [rsp]
        16
        17          pop rax
        18          .cfi_def_cfa_offset 8
        19          ret
        20
        21          .size   bench_iai::iai_wrappers::bench_iai_add, .Lfunc_end5-bench_iai::iai_wrappers::bench_iai_add
    

    Also, in Cargo.toml I added [profile.dev] and [profile.release], and I added a rust-toolchain.toml to keep the toolchain consistent:

    $ cat -n Cargo.toml
         1  [package]
         2  name = "exper_iai"
         3  authors = [ "Wink Saville <[email protected]" ]
         4  license = "MIT OR Apache-2.0"
         5  version = "0.1.0"
         6  edition = "2021"
         7
         8  # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
         9
        10  [dev-dependencies]
        11  criterion = "0.4.0"
        12  iai = "0.1.1"
        13
        14  [[bench]]
        15  name = "bench_iai"
        16  path = "benches/bench_iai.rs"
        17  harness = false
        18
        19
        20  [features]
        21
        22  # From: https://doc.rust-lang.org/cargo/reference/profiles.html#dev
        23  [profile.dev]
        24  opt-level = 0
        25  debug = true
        26  #split-debuginfo = '...'  # Platform-specific.
        27  debug-assertions = true
        28  overflow-checks = true
        29  lto = false
        30  panic = 'unwind'
        31  incremental = true
        32  codegen-units = 256
        33  rpath = false
        34
        35  # From: https://doc.rust-lang.org/cargo/reference/profiles.html#release
        36  [profile.release]
        37  opt-level = 3
        38  debug = false
        39  #split-debuginfo = '...'  # Platform-specific.
        40  debug-assertions = false
        41  overflow-checks = false
        42  lto = false
        43  panic = 'unwind'
        44  incremental = false
        45  codegen-units = 1
        46  rpath = false
    
    $ cat -n rust-toolchain.toml 
         1  [toolchain]
         2  channel = "stable"
         3  #channel = "nightly"
    

    Running main and the tests works as expected:

    $ cargo run
       Compiling exper_iai v0.1.0 (/home/wink/prgs/rust/myrepos/exper-iai)
        Finished dev [unoptimized + debuginfo] target(s) in 0.33s
         Running `target/debug/exper_iai`
    6
    $ cargo test
       Compiling autocfg v1.1.0
       Compiling proc-macro2 v1.0.47
    ...
       Compiling tinytemplate v1.2.1
       Compiling criterion v0.4.0
       Compiling exper_iai v0.1.0 (/home/wink/prgs/rust/myrepos/exper-iai)
        Finished test [unoptimized + debuginfo] target(s) in 8.58s
         Running unittests src/lib.rs (target/debug/deps/exper_iai-854898c18c69642d)
    
    running 1 test
    test tests::it_works ... ok
    
    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
    
         Running unittests src/main.rs (target/debug/deps/exper_iai-6092fd66897760dc)
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
    
       Doc-tests exper_iai
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
    

    And running cargo bench yields a more or less expected result:

    $ cargo clean
    $ cargo bench
       Compiling autocfg v1.1.0
       Compiling proc-macro2 v1.0.47
    ...
       Compiling tinytemplate v1.2.1
       Compiling criterion v0.4.0
        Finished bench [optimized] target(s) in 20.33s
         Running unittests src/lib.rs (target/release/deps/exper_iai-e0c596df81667934)
    
    running 1 test
    test tests::it_works ... ignored
    
    test result: ok. 0 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
    
         Running unittests src/main.rs (target/release/deps/exper_iai-bbf641b3842b4eea)
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
    
         Running benches/bench_iai.rs (target/release/deps/bench_iai-e75a6910d1576500)
    bench_iai_add
      Instructions:                   8
      L1 Accesses:                   12
      L2 Accesses:                    2
      RAM Accesses:                   2
      Estimated Cycles:              92
    

    And here are two more runs of just bench_iai showing the expected consistency:

    $ cargo bench --bench bench_iai
        Finished bench [optimized] target(s) in 0.02s
         Running benches/bench_iai.rs (target/release/deps/bench_iai-e75a6910d1576500)
    bench_iai_add
      Instructions:                   8 (No change)
      L1 Accesses:                   12 (No change)
      L2 Accesses:                    2 (No change)
      RAM Accesses:                   2 (No change)
      Estimated Cycles:              92 (No change)
    
    $ cargo bench --bench bench_iai
        Finished bench [optimized] target(s) in 0.02s
         Running benches/bench_iai.rs (target/release/deps/bench_iai-e75a6910d1576500)
    bench_iai_add
      Instructions:                   8 (No change)
      L1 Accesses:                   12 (No change)
      L2 Accesses:                    2 (No change)
      RAM Accesses:                   2 (No change)
      Estimated Cycles:              92 (No change)
    

    Here is my first unexpected result: if I change the command line by adding taskset -c 0, I wouldn't expect significantly different results, but Instructions is 0, which is unexpected:

    $ taskset -c 0 cargo bench --bench bench_iai
        Finished bench [optimized] target(s) in 0.02s
         Running benches/bench_iai.rs (target/release/deps/bench_iai-e75a6910d1576500)
    bench_iai_add
      Instructions:                   0 (-100.0000%)
      L1 Accesses:      18446744073709551615 (+153722867280912908288%)
      L2 Accesses:                    2 (No change)
      RAM Accesses:                   3 (+50.00000%)
      Estimated Cycles:             114 (+23.91304%)
    
    $ taskset -c 0 cargo bench --bench bench_iai
        Finished bench [optimized] target(s) in 0.02s
         Running benches/bench_iai.rs (target/release/deps/bench_iai-e75a6910d1576500)
    bench_iai_add
      Instructions:                   0 (No change)
      L1 Accesses:      18446744073709551615 (No change)
      L2 Accesses:                    2 (No change)
      RAM Accesses:                   3 (No change)
      Estimated Cycles:             114 (No change)
    
    $ taskset -c 0 cargo bench --bench bench_iai
        Finished bench [optimized] target(s) in 0.02s
         Running benches/bench_iai.rs (target/release/deps/bench_iai-e75a6910d1576500)
    bench_iai_add
      Instructions:                   0 (No change)
      L1 Accesses:      18446744073709551615 (No change)
      L2 Accesses:                    2 (No change)
      RAM Accesses:                   3 (No change)
      Estimated Cycles:             114 (No change)
    

    But a bigger problem appears if I rename bench_iai.rs to iai.rs and the bench within that file from bench_iai_add to iai_add:

    $ cat -n benches/iai.rs 
         1	use iai::black_box;
         2	use exper_iai::add;
         3	
         4	fn iai_add() {
         5	    black_box(add(2, 2));
         6	}
         7	
         8	iai::main!(iai_add,);
    

    And then I make the necessary changes to get things working again; see the exper-iai branch rename-bench_iai_add-to-iai_add. In my opinion only "labels" have changed, and none of the actual assembler code has changed.

    But now I get really unexpected results: I switch branches, clean, and rerun the bench, and now Instructions has a value of 22:

    $ git switch main
    Switched to branch 'main'
    Your branch is up to date with 'origin/main'.
    $ git switch rename-bench_iai_add-to-iai-add 
    Switched to branch 'rename-bench_iai_add-to-iai-add'
    $ cargo clean
    $ cargo bench
       Compiling autocfg v1.1.0
       Compiling proc-macro2 v1.0.47
    ...
       Compiling tinytemplate v1.2.1
       Compiling criterion v0.4.0
        Finished bench [optimized] target(s) in 20.60s
         Running unittests src/lib.rs (target/release/deps/exper_iai-e0c596df81667934)
    
    running 1 test
    test tests::it_works ... ignored
    
    test result: ok. 0 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
    
         Running unittests src/main.rs (target/release/deps/exper_iai-bbf641b3842b4eea)
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
    
         Running benches/iai.rs (target/release/deps/iai-1d6df879cc9849e1)
    iai_add
      Instructions:                  22
      L1 Accesses:                   34
      L2 Accesses:                    2
      RAM Accesses:                   2
      Estimated Cycles:             114
    
    $ cargo bench --bench iai
        Finished bench [optimized] target(s) in 0.02s
         Running benches/iai.rs (target/release/deps/iai-1d6df879cc9849e1)
    iai_add
      Instructions:                  22 (No change)
      L1 Accesses:                   34 (No change)
      L2 Accesses:                    2 (No change)
      RAM Accesses:                   2 (No change)
      Estimated Cycles:             114 (No change)
    
    $ cargo bench --bench iai
        Finished bench [optimized] target(s) in 0.02s
         Running benches/iai.rs (target/release/deps/iai-1d6df879cc9849e1)
    iai_add
      Instructions:                  22 (No change)
      L1 Accesses:                   34 (No change)
      L2 Accesses:                    2 (No change)
      RAM Accesses:                   2 (No change)
      Estimated Cycles:             114 (No change)
    
    opened by winksaville 0
  • Does iai support groups like criterion::group?

    It seems to me that currently we can only put benchmarks into one Rust file when using iai? It there any way to separate benchmarks into submodules like what we can do using criterion?
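
    One possible workaround, sketched with hypothetical module and function names, relies only on iai::main! accepting plain function identifiers as in the Quickstart: define benchmarks in modules and bring them into scope with use.

    mod group_a {
        pub fn bench_add() {
            iai::black_box(2u64 + 2);
        }
    }
    use group_a::bench_add;

    iai::main!(bench_add);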

    opened by dclong 0
  • Support `#[cfg(...)]` in `iai::main!`

    My crate has optional features that I'd like to be able to exclude from the iai run, but the iai::main! macro does not support attributes on each entry, meaning that the best I can do is cfg over the entire macro invocation. This becomes unwieldy fast as the number of possible permutations increases.
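
    For reference, that whole-macro workaround looks roughly like this (a sketch with a hypothetical simd feature and benchmark names):

    #[cfg(feature = "simd")]
    iai::main!(bench_scalar, bench_simd);

    #[cfg(not(feature = "simd"))]
    iai::main!(bench_scalar);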

    https://github.com/LoganDark/stackblur-iter/commit/b9843dce59781d2601b8f73301d21c8e8f33a733#diff-edcd762950a4c63a41c0121bf75b104e97b31a2e17652edf0361f0079d0ce6c2R46-R86

    opened by LoganDark 0
  • Use Callgrind instead of Cachegrind

    Quick draft; I'm willing to put in the work to make this PR prettier if this transition is desired, @bheisler?

    Using Callgrind makes the output even more stable, since we no longer need to do an initial calibration run, so any setup that the OS linker has to perform is never included in the output. This is especially important when using valgrind on macOS (see https://github.com/bheisler/iai/issues/25#issuecomment-1029462079), since the linker does more work at runtime there.

    See Callgrind docs for more info.

    Could also be part of fixing https://github.com/bheisler/iai/issues/7, https://github.com/bheisler/iai/issues/20 and https://github.com/bheisler/iai/issues/23.

    opened by madsmtm 1