A small utility to compare Rust micro-benchmarks.

Overview

cargo benchcmp

A small utility for comparing micro-benchmarks produced by cargo bench. The utility takes two sets of micro-benchmarks as input (one "old" and the other "new") and shows a comparison between corresponding benchmarks.


Dual-licensed under MIT or the UNLICENSE.

Installation

cargo benchcmp can be installed with cargo install:

$ cargo install cargo-benchcmp

The resulting binary should then be in $HOME/.cargo/bin.

Criterion support

This tool only supports the standard benchmark output emitted by cargo bench. For Criterion benchmarks, use the separate critcmp tool instead.

Example output

Coloured example output on aho-corasick benchmarks

Usage

First, run your benchmarks and save them to a file:

$ cargo bench > control
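
cargo bench picks up the nightly-only #[bench] functions in your crate. If you don't have any benchmarks yet, a minimal, hypothetical example (arbitrary workload, not from this project) looks like:

#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn bench_push_1000(b: &mut Bencher) {
    // Arbitrary workload; only the shape of the benchmark matters here.
    b.iter(|| {
        let mut v = Vec::new();
        for i in 0..1000 {
            v.push(i);
        }
        test::black_box(v);
    });
}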

Next, apply the changes you'd like to test out, and then run the benchmarks and save them to a file again.

$ cargo bench > variable

Finally, use cargo benchcmp to compare the benchmark results!

$ cargo benchcmp control variable
 name                                control ns/iter             variable ns/iter            diff ns/iter   diff %  speedup
 ac_one_byte                         354 (28248 MB/s)            349 (28653 MB/s)                      -5   -1.41%   x 1.01
 ac_one_prefix_byte_every_match      150,581 (66 MB/s)           112,957 (88 MB/s)                -37,624  -24.99%   x 1.33
 ac_one_prefix_byte_no_match         354 (28248 MB/s)            350 (28571 MB/s)                      -4   -1.13%   x 1.01
 ac_one_prefix_byte_random           20,273 (493 MB/s)           16,096 (621 MB/s)                 -4,177  -20.60%   x 1.26
 ac_ten_bytes                        108,092 (92 MB/s)           58,588 (170 MB/s)                -49,504  -45.80%   x 1.84
 ac_ten_diff_prefix                  108,082 (92 MB/s)           58,601 (170 MB/s)                -49,481  -45.78%   x 1.84
...

If you want to compare the same set of benchmarks run in multiple ways, reuse the benchmark names across different modules. Your benchmark output will then look like:

module1::ac_two_one_prefix_byte_random   ...
module2::ac_two_one_prefix_byte_random   ...
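
In Rust source, that corresponds to duplicating the benchmark functions across two modules. A schematic, hypothetical sketch (bodies elided):

// Crate root already has #![feature(test)] and extern crate test.
mod module1 {
    use test::Bencher;

    #[bench]
    fn ac_two_one_prefix_byte_random(b: &mut Bencher) {
        b.iter(|| { /* first implementation */ });
    }
}

mod module2 {
    use test::Bencher;

    #[bench]
    fn ac_two_one_prefix_byte_random(b: &mut Bencher) {
        b.iter(|| { /* second implementation */ });
    }
}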

You can then instruct benchcmp to compare the two modules by providing the two prefixes, followed by the file containing the output:

$ cargo benchcmp module1:: module2:: benchmark-output
 name                                module1:: ns/iter      module2:: ns/iter  diff ns/iter   diff %  speedup
 ac_one_byte                         354 (28248 MB/s)       349 (28653 MB/s)             -5   -1.41%   x 1.01
 ac_one_prefix_byte_every_match      150,581 (66 MB/s)      112,957 (88 MB/s)       -37,624  -24.99%   x 1.33
 ac_one_prefix_byte_no_match         354 (28248 MB/s)       350 (28571 MB/s)             -4   -1.13%   x 1.01
 ac_one_prefix_byte_random           20,273 (493 MB/s)      16,096 (621 MB/s)        -4,177  -20.60%   x 1.26
 ac_ten_bytes                        108,092 (92 MB/s)      58,588 (170 MB/s)       -49,504  -45.80%   x 1.84
 ac_ten_diff_prefix                  108,082 (92 MB/s)      58,601 (170 MB/s)       -49,481  -45.78%   x 1.84
 ac_ten_one_prefix_byte_every_match  150,561 (66 MB/s)      112,920 (88 MB/s)       -37,641  -25.00%   x 1.33
 ac_ten_one_prefix_byte_no_match     354 (28248 MB/s)       350 (28571 MB/s)             -4   -1.13%   x 1.01
 ac_ten_one_prefix_byte_random       23,684 (422 MB/s)      19,181 (521 MB/s)        -4,503  -19.01%   x 1.23
 ac_two_bytes                        3,138 (3186 MB/s)      3,125 (3200 MB/s)           -13   -0.41%   x 1.00
 ac_two_diff_prefix                  3,138 (3186 MB/s)      3,124 (3201 MB/s)           -14   -0.45%   x 1.00
 ac_two_one_prefix_byte_every_match  150,571 (66 MB/s)      112,934 (88 MB/s)       -37,637  -25.00%   x 1.33
 ac_two_one_prefix_byte_no_match     354 (28248 MB/s)       350 (28571 MB/s)             -4   -1.13%   x 1.01
 ac_two_one_prefix_byte_random       21,009 (476 MB/s)      16,511 (605 MB/s)        -4,498  -21.41%   x 1.27

The tool supports basic filtering. For example, it's easy to see only improvements:

$ cargo benchcmp old new --improvements
 name                             old ns/iter        new ns/iter             diff ns/iter  diff %  speedup
 ac_one_byte                      367 (27247 MB/s)   367 (27247 MB/s)                   0   0.00%   x 1.00
 ac_two_one_prefix_byte_no_match  371 (26954 MB/s)   368 (27173 MB/s)                  -3  -0.81%   x 1.01
 ac_two_one_prefix_byte_random    11,530 (867 MB/s)  11,514 (868 MB/s)                -16  -0.14%   x 1.00

Or only see regressions:

$ cargo benchcmp old new --regressions
 name                                old ns/iter        new ns/iter             diff ns/iter  diff %  speedup
 ac_one_prefix_byte_every_match      27,425 (364 MB/s)  27,972 (357 MB/s)                547   1.99%   x 0.98
 ac_one_prefix_byte_no_match         367 (27247 MB/s)   373 (26809 MB/s)                   6   1.63%   x 0.98
 ac_one_prefix_byte_random           11,076 (902 MB/s)  11,243 (889 MB/s)                167   1.51%   x 0.99
 ac_ten_bytes                        25,474 (392 MB/s)  25,754 (388 MB/s)                280   1.10%   x 0.99
 ac_ten_diff_prefix                  25,466 (392 MB/s)  25,800 (387 MB/s)                334   1.31%   x 0.99
 ac_ten_one_prefix_byte_every_match  27,424 (364 MB/s)  28,046 (356 MB/s)                622   2.27%   x 0.98
 ac_ten_one_prefix_byte_no_match     367 (27247 MB/s)   369 (27100 MB/s)                   2   0.54%   x 0.99
 ac_ten_one_prefix_byte_random       13,661 (732 MB/s)  13,742 (727 MB/s)                 81   0.59%   x 0.99
 ac_two_bytes                        3,141 (3183 MB/s)  3,164 (3160 MB/s)                 23   0.73%   x 0.99
 ac_two_diff_prefix                  3,141 (3183 MB/s)  3,174 (3150 MB/s)                 33   1.05%   x 0.99
 ac_two_one_prefix_byte_every_match  27,638 (361 MB/s)  27,953 (357 MB/s)                315   1.14%   x 0.99

Often, the difference between micro-benchmark runs is just noise, so you can also filter by percent difference:

$ cargo benchcmp old new --regressions --threshold 2
 name                                old ns/iter        new ns/iter             diff ns/iter  diff %  speedup
 ac_ten_one_prefix_byte_every_match  27,424 (364 MB/s)  28,046 (356 MB/s)                622   2.27%   x 0.98
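
For reference, the derived columns follow directly from the two timings. A minimal sketch of the arithmetic as read off the output above (not the tool's actual code):

/// Hypothetical helper mirroring the table's derived columns.
fn derived_columns(old_ns: f64, new_ns: f64) -> (f64, f64, f64) {
    let diff = new_ns - old_ns;           // diff ns/iter (negative means faster)
    let diff_pct = 100.0 * diff / old_ns; // diff %
    let speedup = old_ns / new_ns;        // e.g. 150,581 / 112,957 gives x 1.33
    (diff, diff_pct, speedup)
}
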
Comments
  • List missing benchmarks

    Fixes #27

    I added this behind a command line flag, but if you think it would be appropriate, I could enable it by default and make the flag disable it instead. I left the warning in, but it is only printed if the user did not pass --include-missing.

    Below is how it looks. I fed in some of my own benchmark results. I couldn't get the integration tests to pass on my machine, so I'm going to wait for the Travis output and see what adjustments I need to make.

    Let me know if you have any feedback!

    $ cat old
    running 8 tests
    test b01_compile_trivial   ... bench:       2,132 ns/iter (+/- 673)
    test b02_compile_large     ... bench:   1,594,916 ns/iter (+/- 78,139)
    test b03_compile_huge      ... bench:  51,480,362 ns/iter (+/- 1,144,024)
    test b04_compile_simple    ... bench:      31,399 ns/iter (+/- 868)
    test b05_compile_slow      ... bench:     225,000 ns/iter (+/- 5,625)
    test b06_interpret_trivial ... bench:      10,325 ns/iter (+/- 221)
    test b07_interpret_simple  ... bench:   6,740,466 ns/iter (+/- 81,711)
    test b08_interpret_slow    ... ignored
    
    test result: ok. 0 passed; 0 failed; 1 ignored; 7 measured
    
         Running target/release/deps/brainfuck-ca6ac36e58b54315
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
    
         Running target/release/deps/brainfuck-184a47abc46cc402
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
    
    $ cat new
    running 16 tests
    test b01_compile_trivial       ... bench:       2,829 ns/iter (+/- 279)
    test b01_compile_trivial_opt   ... bench:       1,746 ns/iter (+/- 55)
    test b02_compile_large         ... bench:   2,516,662 ns/iter (+/- 392,982)
    test b02_compile_large_opt     ... bench:   1,039,463 ns/iter (+/- 71,707)
    test b03_compile_huge          ... bench:  86,575,722 ns/iter (+/- 3,605,019)
    test b03_compile_huge_opt      ... bench:  15,571,740 ns/iter (+/- 1,062,383)
    test b04_compile_simple        ... bench:      44,247 ns/iter (+/- 490)
    test b04_compile_simple_opt    ... bench:      24,441 ns/iter (+/- 255)
    test b05_compile_slow          ... bench:     289,571 ns/iter (+/- 3,150)
    test b05_compile_slow_opt      ... bench:     158,586 ns/iter (+/- 2,301)
    test b06_interpret_trivial     ... bench:      10,442 ns/iter (+/- 46)
    test b06_interpret_trivial_opt ... bench:       5,634 ns/iter (+/- 19)
    test b07_interpret_simple      ... bench:   7,388,517 ns/iter (+/- 34,084)
    test b07_interpret_simple_opt  ... bench:   4,114,281 ns/iter (+/- 18,730)
    test b08_interpret_slow        ... ignored
    test b08_interpret_slow_opt    ... ignored
    
    test result: ok. 0 passed; 0 failed; 2 ignored; 14 measured
    
         Running target/release/deps/brainfuck-ca6ac36e58b54315
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
    
         Running target/release/deps/brainfuck-184a47abc46cc402
    
    running 0 tests
    
    test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured
    
    $ cargo run -- benchcmp old new --include-missing
        Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
         Running `target/debug/cargo-benchcmp benchcmp old new --include-missing`
     name                       old ns/iter  new ns/iter  diff ns/iter  diff % 
     b01_compile_trivial        2,132        2,829                 697  32.69% 
     b02_compile_large          1,594,916    2,516,662         921,746  57.79% 
     b03_compile_huge           51,480,362   86,575,722     35,095,360  68.17% 
     b04_compile_simple         31,399       44,247             12,848  40.92% 
     b05_compile_slow           225,000      289,571            64,571  28.70% 
     b06_interpret_trivial      10,325       10,442                117   1.13% 
     b07_interpret_simple       6,740,466    7,388,517         648,051   9.61% 
     b01_compile_trivial_opt    n/a          1,746                 n/a     n/a 
     b02_compile_large_opt      n/a          1,039,463             n/a     n/a 
     b03_compile_huge_opt       n/a          15,571,740            n/a     n/a 
     b04_compile_simple_opt     n/a          24,441                n/a     n/a 
     b05_compile_slow_opt       n/a          158,586               n/a     n/a 
     b06_interpret_trivial_opt  n/a          5,634                 n/a     n/a 
     b07_interpret_simple_opt   n/a          4,114,281             n/a     n/a
    

    screenshot from 2017-04-19 12-39-52

    $ cargo run -- benchcmp new old --include-missing
       Compiling cargo-benchcmp v0.1.5 (file:///home/sunjay/Documents/projects/cargo-benchcmp)
        Finished dev [unoptimized + debuginfo] target(s) in 3.9 secs
         Running `target/debug/cargo-benchcmp benchcmp new old --include-missing`
     name                       new ns/iter  old ns/iter  diff ns/iter   diff % 
     b01_compile_trivial        2,829        2,132                -697  -24.64% 
     b02_compile_large          2,516,662    1,594,916        -921,746  -36.63% 
     b03_compile_huge           86,575,722   51,480,362    -35,095,360  -40.54% 
     b04_compile_simple         44,247       31,399            -12,848  -29.04% 
     b05_compile_slow           289,571      225,000           -64,571  -22.30% 
     b06_interpret_trivial      10,442       10,325               -117   -1.12% 
     b07_interpret_simple       7,388,517    6,740,466        -648,051   -8.77% 
     b01_compile_trivial_opt    1,746        n/a                   n/a      n/a 
     b02_compile_large_opt      1,039,463    n/a                   n/a      n/a 
     b03_compile_huge_opt       15,571,740   n/a                   n/a      n/a 
     b04_compile_simple_opt     24,441       n/a                   n/a      n/a 
     b05_compile_slow_opt       158,586      n/a                   n/a      n/a 
     b06_interpret_trivial_opt  5,634        n/a                   n/a      n/a 
     b07_interpret_simple_opt   4,114,281    n/a                   n/a      n/a 
    

    screenshot from 2017-04-19 12-40-02

    opened by sunjay 15
  • Upgrade prettytable-rs to 0.8

    I wasn't getting any color with TERM=tmux-256color, neither automatically nor when forced to always, so I think the old version didn't understand that terminal. I do get automatic color with prettytable-rs 0.8.

    opened by cuviper 12
  • Tests

    Writing tests as described in #3. Going straight for the bonus points: so far I've written quickcheck tests for a number of unit test situations.

    TODO list

    Unit tests

    • [x] Test the benchmark parser, including the fact that throughput is optional.
    • [x] Test that find overlap finds the correct pairs.
    • [x] Test that find overlap reports correct missing benchmarks (for both old and new).
    • [x] Test that the name determination for the two columns works correctly when given two file paths. (This may require refactoring the names method off of the Args struct.)
    • [x] Test commafy (see the sketch after these lists).
    • [x] Test that the name prefix feature works. (Refactor code to get a unit test.)

    Integration tests

    • [x] Test some basic example inputs and check that it produces the expected output.
    • [x] Test that passing input on stdin works.
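
    As a flavour of those quickcheck tests, here is a hedged sketch of a property for a hypothetical commafy helper (names and shape assumed, not the crate's actual code):

    #[macro_use]
    extern crate quickcheck;

    /// Hypothetical helper: 1234567 -> "1,234,567".
    fn commafy(n: u64) -> String {
        let s = n.to_string();
        let mut out = String::new();
        for (i, c) in s.chars().enumerate() {
            if i > 0 && (s.len() - i) % 3 == 0 {
                out.push(',');
            }
            out.push(c);
        }
        out
    }

    quickcheck! {
        // Stripping the commas back out must recover the plain number.
        fn commafy_roundtrip(n: u64) -> bool {
            commafy(n).replace(",", "") == n.to_string()
        }
    }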
    opened by Apanatshka 12
  • Interest in extension of the tool?

    I forked your project and ported it to Rust as an exercise. I don't consider that an improvement, since you now need to compile the tool before you can use it. So I didn't open a PR. Besides, it's in an orphan branch because it doesn't really relate strongly to the Python stuff.

    But then I had an idea to add to the tool and I did it to the Rust version because I'm more familiar with that code. Are you interested in pulling that code back into this repo? I prefer contributing/collaborating, even though it's a small tool.

    opened by Apanatshka 9
  • Do not color comparisons that changed less than 3%

    Currently, comparisons are always color coded: any improvement is shown in green and any regression is shown in red. It doesn't matter if the change is 0.78% or if it's 7.8%, the output looks the same.

    This change introduces a 3% cut-off so that comparisons where the change is smaller than 3% are shown without color. The value of 3% is pretty arbitrary and should perhaps be set lower.

    The cut-off is not connected with the --threshold option, so it's possible to ask for a threshold of 1% and still see uncolored lines in the output. That may or may not be a feature :-)
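
    A minimal sketch of the described rule (hypothetical function, assuming a negative diff % is an improvement; the real change works on prettytable rows):

    /// Returns the row color, or None when the change is within the cut-off.
    fn row_color(diff_pct: f64) -> Option<&'static str> {
        const CUTOFF_PCT: f64 = 3.0;
        if diff_pct.abs() < CUTOFF_PCT {
            None            // small change: print without color
        } else if diff_pct < 0.0 {
            Some("green")   // improvement (time went down)
        } else {
            Some("red")     // regression (time went up)
        }
    }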

    opened by mgeisler 7
  • Coloured regressions/improvements

    As a side-effect of switching from tabwriter to prettytable-rs, we can now do coloured terminal output. It's a small change to get red lines for regressions and green lines for improvements. It almost makes the --regressions/--improvements flags superfluous :)

    I can implement this feature quite easily (I did it before but I can't find the code just now), I may be able to squeeze it into a break or do it right after work. My modem is broken so I'm currently relying on internet access at work 😐

    enhancement 
    opened by Apanatshka 7
  • Include complete example in documentation

    It would be great if there were a complete example that shows how to generate the "old" and "new" files that get passed to cargo benchcmp old new.

    I feel like a dummy because none of

    $ cargo bench > old
    $ cargo bench > new
    $ cargo benchcmp old new
    

    nor

    $ cargo bench &> old
    $ cargo bench &> new
    $ cargo benchcmp old new
    

    nor

    $ cargo benchcmp ./target/release/bench-12345 ./target/release/bench-67890
    

    are working for me. (Obviously, in a non-toy example, I'd make changes between generating old and new benchmark results).

    They all spit back:

    Invalid arguments.
    
    Usage:
        cargo benchcmp [options] <old> <new>
        cargo benchcmp [options] <old> <new> <file>
        cargo benchcmp -h | --help
        cargo benchcmp --version
    

    What am I doing wrong here? I'd be happy to submit a PR updating the docs with a complete example once I can figure out how to run cargo benchcmp properly.

    opened by fitzgen 6
  • Getting blank output, not quite sure why?

    Hi there,

    Thanks for writing this cool tool :) I'm new to Rust, so please excuse any stupid questions.

    I'm trying to use benchcmp with my lib, Frunk, using the following:

    $ rustup run nightly cargo bench > benches-control && rustup run nightly cargo bench > benches-variable
    #.. then 
    $ cargo benchcmp benches-control benches-variable
    

    I understand that benchmarking on the same branch will yield zero (or nearly zero) differences, and my intention is to use benchcmp to compare performance before and after applying a PR, so I'll switch branches in between during actual usage.

    Unfortunately the second command yields no output and I'm not sure why. My benches are here, located in a separate directory.

    Thanks in advance for your help,

    Lloyd

    opened by lloydmeta 4
  • Colored output

    I just started with implementing #7. Right now it only does automatic detection and colouring, and I made the header row bold to make it more distinctive.

    Still needs the flag, so don't merge yet.

    opened by Apanatshka 4
  • Project co-ownership?

    Ok, I've rewritten this issue because the original had petty complaints about things I should have expected anyway.


    Are you interested in sharing project ownership with me? I have more things I'd love to contribute to the tool to make it even more useful. And I think I can learn a lot from you about Rust. I'm also willing to look into the other issues that are open right now. But I'd like to have a clear sense of the roles and the way we work together.

    opened by Apanatshka 4
  • Create junit-format xml file from benchmark comparisons.

    I implemented --junit <path>. This will create junit-format xml from benchmark comparisons.

    JUnit XML Format: https://www.ibm.com/support/knowledgecenter/en/SSUFAU_1.0.0/com.ibm.rsar.analysis.codereview.cobol.doc/topics/cac_useresults_junit.html
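
    For readers unfamiliar with the format, each comparison presumably maps to a <testcase> element, with regressions reported as failures. A rough, hypothetical sketch of that shape (not the PR's actual output):

    /// Hypothetical rendering of one comparison as a JUnit <testcase>.
    fn junit_testcase(name: &str, regressed: bool, diff_pct: f64) -> String {
        if regressed {
            format!(
                "<testcase name=\"{}\"><failure message=\"regressed by {:.2}%\"/></testcase>",
                name, diff_pct
            )
        } else {
            format!("<testcase name=\"{}\"/>", name)
        }
    }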

    opened by glaceef 3
  • Take account of bench error

    Thanks for this utility!

    I think it would be nice to indicate differences inside the control error somehow. Perhaps colour them orange/warning-colour instead of red when a subsequent bench is slower than the control but inside the error.

    For example a control

    test benchmark_a  ... bench:  24,000,000 ns/iter (+/- 500,000)
    test benchmark_b  ... bench:  24,000,000 ns/iter (+/- 5,000)
    

    ...and the latest bench

    test benchmark_a  ... bench:  24,150,000 ns/iter (+/- 480,000)
    test benchmark_b  ... bench:  24,150,000 ns/iter (+/- 5,500)
    

    Will produce this output:

     name         control ns/iter   latest ns/iter     diff ns/iter  diff %  speedup 
     benchmark_a  24,000,000        24,150,000             150,000   0.62%   x 0.99 // colour orange
     benchmark_b  24,000,000        24,150,000             150,000   0.62%   x 0.99 // colour red
    

    benchmark_a will be a warning as the diff is inside the control error. benchmark_b will be red as before as the diff is outside the error.

    Perhaps you could also allow an option, similar to --threshold, to ignore warnings/diffs inside errors.
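
    A minimal sketch of that classification, assuming the rule is "a slowdown smaller than the control's +/- margin is a warning" (hypothetical names, not benchcmp code):

    enum Verdict {
        Improvement,
        WithinError, // slower, but inside the control's +/- margin: colour orange
        Regression,  // slower by more than the margin: colour red
    }

    fn classify(control_ns: f64, control_err: f64, latest_ns: f64) -> Verdict {
        let diff = latest_ns - control_ns;
        if diff <= 0.0 {
            Verdict::Improvement
        } else if diff <= control_err {
            Verdict::WithinError // benchmark_a: 150,000 <= 500,000
        } else {
            Verdict::Regression  // benchmark_b: 150,000 > 5,000
        }
    }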

    opened by alexheretic 0
  • Windows pipes in UTF-16

    On Windows, piping the benchmarks like cargo bench > old will save the results in UTF-16. Running cargo benchcmp will then fail with "stream did not contain valid UTF-8".
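
    A possible workaround, assuming the redirection happens in PowerShell (whose > operator re-encodes text as UTF-16), is to let cmd.exe do the redirection so the raw UTF-8 bytes are written unchanged:

    $ cmd /c "cargo bench > old"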

    opened by cbreeden 1
  • Pick the best of multiple runs

    One relatively simple way to paper over the variability of benchmarks (for example, CPU warm-up effects) is to pick the best time of multiple runs. The tool could allow multiple input files for both before and after.
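
    A sketch of the proposed merge, assuming each run has been parsed into a name -> ns/iter map (hypothetical types, not benchcmp's internals):

    use std::collections::HashMap;

    /// Keep the best (lowest) ns/iter per benchmark across several runs.
    fn best_of(runs: &[HashMap<String, u64>]) -> HashMap<String, u64> {
        let mut best = HashMap::new();
        for run in runs {
            for (name, &ns) in run {
                best.entry(name.clone())
                    .and_modify(|b: &mut u64| *b = (*b).min(ns))
                    .or_insert(ns);
            }
        }
        best
    }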

    opened by bluss 4
  • Set up test coverage badge

    I saw another crate use a badge for test coverage using a service called Coveralls. I have no experience with them, but it looks nice. Might be a good idea when #3 is resolved?

    opened by Apanatshka 5
  • N-way comparison with plots

    This is the big feature I've been working on. I would link you to the branch, but my modem is broken and I forgot to bring the code/laptop to work. Which also means that I cannot currently show you pretty pictures to help make the case for this feature.

    What I can do is give a quick overview of the design decisions and effects of this feature.

    Effects

    • Generate images (a bar chart with error bars) 📊
    • Extra code to maintain 🐞
    • Dependency on gnuplot for the plotting feature 😕 (couldn't find a pure Rust lib, so I'm piping to whatever is called gnuplot on the PATH; a sketch of this follows after the design decisions below)

    Design decisions

    • Currently produces one plot per test name since different benches can have very different values. But maybe a big plot is also nice? Do we give that as an option then, or do we always generate it?
    • Where to put the plots. Right now it defaults to target/benchcmp. Should it be configurable? What about the issue where you call benchcmp from a subdirectory of your project? AFAIK we can't get the project root from an environment variable. Do we go dirty and poll the directory structure to find the Cargo.toml file?
    • Command line interface: my original plan was to put the original stuff under a subcommand table and this new stuff under subcommand plot, but now I'm thinking perhaps it's better to provide a separate cargo-benchplot executable?
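
    As a sketch of the gnuplot piping mentioned in the effects list above (hypothetical; assumes a gnuplot binary on PATH and one value per benchmark, without error bars):

    use std::io::Write;
    use std::process::{Command, Stdio};

    fn plot(names: &[&str], ns_per_iter: &[f64]) -> std::io::Result<()> {
        let mut child = Command::new("gnuplot")
            .stdin(Stdio::piped())
            .spawn()?;
        let stdin = child.stdin.as_mut().expect("stdin was piped");
        writeln!(stdin, "set terminal png; set output 'bench.png'")?;
        writeln!(stdin, "plot '-' using 2:xtic(1) with boxes title 'ns/iter'")?;
        for (name, v) in names.iter().zip(ns_per_iter) {
            writeln!(stdin, "{} {}", name, v)?;
        }
        writeln!(stdin, "e")?; // terminates gnuplot's inline data
        child.wait()?;
        Ok(())
    }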
    opened by Apanatshka 3