hyperfine

A command-line benchmarking tool.

Demo: Benchmarking fd and find.

Features

  • Statistical analysis across multiple runs.
  • Support for arbitrary shell commands.
  • Constant feedback about the benchmark progress and current estimates.
  • Warmup runs can be executed before the actual benchmark.
  • Cache-clearing commands can be set up before each timing run.
  • Statistical outlier detection to detect interference from other programs and caching effects.
  • Export results to various formats: CSV, JSON, Markdown, AsciiDoc.
  • Parameterized benchmarks (e.g. vary the number of threads).
  • Cross-platform.

Usage

Basic benchmark

To run a benchmark, you can simply call hyperfine <command>.... The argument(s) can be any shell command. For example:

hyperfine 'sleep 0.3'

Hyperfine will automatically determine the number of runs to perform for each command. By default, it will perform at least 10 benchmarking runs. To change this, you can use the -m/--min-runs option:

hyperfine --min-runs 5 'sleep 0.2' 'sleep 3.2'
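
Depending on your hyperfine version, there are also options to cap or fix the number of runs; the following sketch assumes the -M/--max-runs and -r/--runs flags of recent releases:

hyperfine --max-runs 20 'sleep 0.2'
hyperfine --runs 5 'sleep 0.2'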

Warmup runs and preparation commands

If the program execution time is limited by disk I/O, the benchmarking results can be heavily influenced by disk caches and whether they are cold or warm.

If you want to run the benchmark on a warm cache, you can use the -w/--warmup option to perform a certain number of program executions before the actual benchmark:

hyperfine --warmup 3 'grep -R TODO *'

Conversely, if you want to run the benchmark on a cold cache, you can use the -p/--prepare option to run a special command before each timing run. For example, to clear hard disk caches on Linux, you can run:

sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

To use this specific command with Hyperfine, call sudo -v to temporarily gain sudo permissions and then call:

hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' 'grep -R TODO *'

Parameterized benchmarks

If you want to run a benchmark where only a single parameter is varied (say, the number of threads), you can use the -P/--parameter-scan option and call:

hyperfine --prepare 'make clean' --parameter-scan num_threads 1 12 'make -j {num_threads}'

This also works with decimal numbers. The -D/--parameter-step-size option can be used to control the step size:

hyperfine --parameter-scan delay 0.3 0.7 -D 0.2 'sleep {delay}'

This runs sleep 0.3, sleep 0.5 and sleep 0.7.
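
Recent hyperfine versions also accept an explicit list of parameter values (not necessarily numeric); the example below assumes the -L/--parameter-list option is available in your version:

hyperfine -L delay 0.2,0.3,0.5 'sleep {delay}'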

Shell functions and aliases

If you are using bash, you can export shell functions to directly benchmark them with hyperfine:

$ my_function() { sleep 1; }
$ export -f my_function
$ hyperfine my_function

If you are using a different shell, or if you want to benchmark shell aliases, you may try to put them in a separate file:

echo 'my_function() { sleep 1; }' > /tmp/my_function.sh
echo 'alias my_alias="sleep 1"' > /tmp/my_alias.sh
hyperfine 'source /tmp/my_function.sh; eval my_function'
hyperfine 'source /tmp/my_alias.sh; eval my_alias'
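
If the function or alias relies on a shell other than the default one, the -S/--shell option can be used to choose the shell that hyperfine runs the command in (a sketch; zsh is just an example and must be installed):

hyperfine --shell zsh 'source /tmp/my_function.sh; my_function'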

Export results

Hyperfine has multiple options for exporting benchmark results: CSV, JSON, Markdown, and AsciiDoc (see the --help text for details). To export results to Markdown, for example, you can use the --export-markdown option, which will create tables like this:

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `find . -iregex '.*[0-9]\.jpg$'` | 2.275 ± 0.046 | 2.243 | 2.397 | 9.79 ± 0.22 |
| `find . -iname '*[0-9].jpg'` | 1.427 ± 0.026 | 1.405 | 1.468 | 6.14 ± 0.13 |
| `fd -HI '.*[0-9]\.jpg$'` | 0.232 ± 0.002 | 0.230 | 0.236 | 1.00 |
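
For example, a run that writes both a Markdown table and a JSON report could look like this (the output filenames are placeholders):

hyperfine --export-markdown results.md --export-json results.json 'sleep 0.3' 'sleep 0.5'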

The JSON output is useful if you want to analyze the benchmark results in more detail. See the scripts/ folder for some examples.
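
As a quick sketch of what such an analysis can look like, assuming jq is installed and that the JSON export contains a results array with command, mean and stddev fields (as in recent hyperfine versions):

jq '.results[] | {command, mean, stddev}' results.json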

Installation

On Ubuntu

Download the appropriate .deb package from the Release page and install it via dpkg:

wget https://github.com/sharkdp/hyperfine/releases/download/v1.11.0/hyperfine_1.11.0_amd64.deb
sudo dpkg -i hyperfine_1.11.0_amd64.deb

On Fedora

On Fedora, hyperfine can be installed from the official repositories:

dnf install hyperfine

On Alpine Linux

On Alpine Linux, hyperfine can be installed from the official repositories:

apk add hyperfine

On Arch Linux

On Arch Linux, hyperfine can be installed from the official repositories:

pacman -S hyperfine

On NixOS

On NixOS, hyperfine can be installed from the official repositories:

nix-env -i hyperfine

On Void Linux

Hyperfine can be installed via xbps:

xbps-install -S hyperfine

On macOS

Hyperfine can be installed via Homebrew:

brew install hyperfine

Or you can install using MacPorts:

sudo port selfupdate
sudo port install hyperfine

On FreeBSD

Hyperfine can be installed via pkg:

pkg install hyperfine

On OpenBSD

Hyperfine can be installed via pkg_add:

doas pkg_add hyperfine

With conda

Hyperfine can be installed via conda from the conda-forge channel:

conda install -c conda-forge hyperfine

With cargo (Linux, macOS, Windows)

Hyperfine can be installed via cargo:

cargo install hyperfine

Make sure that you use Rust 1.43 or higher.

From binaries (Linux, macOS, Windows)

Download the corresponding archive from the Release page.

Alternative tools

Hyperfine is inspired by bench.

Integration with other tools

Chronologer is a tool that uses hyperfine to visualize changes in benchmark timings across your Git history.

Make sure to check out the scripts folder in this repository for a set of tools to work with hyperfine benchmark results.

Origin of the name

The name hyperfine was chosen in reference to the hyperfine levels of caesium 133 which play a crucial role in the definition of our base unit of time — the second.

License

Copyright (c) 2018-2020 The hyperfine developers

hyperfine is distributed under the terms of both the MIT License and the Apache License 2.0.

See the LICENSE-APACHE and LICENSE-MIT files for license details.

Comments
  • Add possibility to disable intermediate shell

    It would be great to have a way to disable the intermediate shell for very fast commands where we need a resolution that is better than 3 ms (typical standard deviation of the shell spawning time).

    This new mode would obviously not be able to benchmark complex commands like seq 1000 | factor, but we could use ~~https://crates.io/crates/shellwords~~ https://crates.io/crates/shell-words to be able to run commands like my_command --foo "some string" --bar.
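
    Newer hyperfine releases appear to provide this; depending on your version, something like the following may work (-N and --shell=none are assumptions here):

    hyperfine --shell=none 'sleep 0.3'
    hyperfine -N 'sleep 0.3'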

    feature-request 
    opened by sharkdp 19
  • consider an option to capture stdout without printing it

    By default, Hyperfine will attach Stdio::null() to the stdout of the command it runs. Some programs, like greps, will detect this, alter how they execute and run more quickly as a result. For example, in the case of GNU grep, if it detects that stdout is /dev/null, then it will behave as if the -q/--quiet flag were present. This means GNU grep can exit as soon as a match is found. This is a perfectly legitimate optimization since its only observable behavior is its exit code, and its exit code is only influenced by whether a match exists or not (assuming no errors or interrupts occur).

    What this means is that, e.g., time grep foo file | wc -l and hyperfine 'grep foo file' aren't quite doing the same thing. That's no fault of Hyperfine; however, it would be nice to have a way to easily work around it. As of today, here are some work-arounds:

    • Use the --show-output flag, which captures stdout. But this also prints stdout, which can be undesirable if there is a lot of output.
    • In the case of grep specifically, use a query that is not expected to match and give Hyperfine the --ignore-failure flag.

    A better work-around might be to have some other flag, maybe --capture-output, that causes Hyperfine to capture stdout but then do nothing with it.

    I was motivated to write this feature request based on an errant benchmark found in the wild: https://old.reddit.com/r/commandline/comments/mibsw8/alternative_to_grep_less/gt90lmm/
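
    A further possible work-around, sketched here as an assumption rather than a documented feature, is to make stdout a pipe instead of /dev/null, so the program cannot detect /dev/null, at the cost of a tiny amount of pipe overhead:

    hyperfine 'grep foo file | cat'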

    feature-request 
    opened by BurntSushi 13
  • Use clap to generate shell completions

    Re: #288

    Hopefully this is analogous to sharkdp/fd#66. Changes to ci/before_deploy.bash mirror sharkdp/fd when copying shell completions across.

    This is the first PR I've submitted with Rust -- still learning OSS and Rust, so please share any feedback and I'll fix up what I can.

    opened by diatoner 13
  • PR for relative speed as part of exported data?

    The CLI output has a really great summary formatted like:

    Summary
      'curl google.com' ran
        2.93 ± 2.29 times faster than 'wget google.com'
    

    The relative speed provided (2.93) is, in my opinion, a major takeaway when reading speed tests. Unfortunately, this information is not (directly) part of the exported data for Markdown, JSON, and CSV.

    Would you be willing to accept a PR adding information about the relative speed to the exported data?

    opened by mathiasrw 13
  • Hyperfine not recognising LC_ALL=C or globs

    I'm using a slightly complicated for loop combined with hyperfine to compare the relative speeds of different versions of grep. The lines beginning with tee simply redirect both STDOUT and STDERR - in this case, all output from hyperfine - to a plain text file.

    for i in {1..15}; \
    do
    (hyperfine -m 10 --show-output -p 'sync64 -nobanner' 'LC_ALL=C grep -i "ajndoandajskaskaksnaodnasnakdnaosnaond" *.txt') 2>&1 |
    tee -a "../Tests/grep Test 5 (Hyperfine Full Directory w. Sync).txt"; \
    (hyperfine -m 10 --show-output -p 'sync64 -nobanner' 'LC_ALL=C grep -Fi "ajndoandajskaskaksnaodnasnakdnaosnaond" *.txt') 2>&1 |
    tee -a "../Tests/grep -F Test 5 (Hyperfine Full Directory w. Sync).txt"; \
    (hyperfine -m 10 --show-output -p 'sync64 -nobanner' 'rg -i "ajndoandajskaskaksnaodnasnakdnaosnaond" *.txt') 2>&1 |
    tee -a "../Tests/ripgrep Test 5 (Hyperfine Full Directory w. Sync).txt"; \
    done;
    

    However, it fails with the following output:

    Benchmark #1: LC_ALL=C grep -i "ajndoandajskaskaksnaodnasnakdnaosnaond" *.txt

    Flushing: B C D 'LC_ALL' is not recognized as an internal or external command, operable program or batch file. Error: Command terminated with non-zero exit code. Use the '-i'/'--ignore-failure' option if you want to ignore this. Alternatively, use the '--show-output' option to debug what went wrong. Benchmark #1: LC_ALL=C grep -Fi "ajndoandajskaskaksnaodnasnakdnaosnaond" *.txt

    Flushing: B C D 'LC_ALL' is not recognized as an internal or external command, operable program or batch file. Error: Command terminated with non-zero exit code. Use the '-i'/'--ignore-failure' option if you want to ignore this. Alternatively, use the '--show-output' option to debug what went wrong. Benchmark #1: rg -i "ajndoandajskaskaksnaodnasnakdnaosnaond" *.txt

    Flushing: B C D *.txt: The filename, directory name, or volume label syntax is incorrect. (os error 123) Error: Command terminated with non-zero exit code. Use the '-i'/'--ignore-failure' option if you want to ignore this. Alternatively, use the '--show-output' option to debug what went wrong.

    Each command in the loop fails when passed through hyperfine, producing two distinct errors. The first two commands suggest that hyperfine doesn't support commands prefixed with the LC_ALL=C environment variable assignment, and the third appears to have trouble with the glob in *.txt. Does hyperfine not support these, or am I using it/them incorrectly?
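
    One possible explanation, offered here as an assumption: the error messages look like they come from cmd.exe, which hyperfine uses as its intermediate shell on Windows and which understands neither VAR=value prefixes nor globs. Selecting a POSIX shell with -S/--shell may help, assuming a bash executable is on the PATH:

    hyperfine -S bash -m 10 --show-output 'LC_ALL=C grep -i "ajndoandajskaskaksnaodnasnakdnaosnaond" *.txt'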

    question 
    opened by hashimaziz1 13
  • Provide finer-grained statistics

    It'd be nice to be able to show other statistics such as the 95th percentile or median runtime. HdrHistogram can record this with relatively little overhead, and there's a pretty good official Rust implementation here (I'm one of the maintainers).

    feature-request question 
    opened by jonhoo 12
  • Implement shell bypassing

    As commented on #336, I've made an initial patch to allow one to disable the shell intermediation, and just parse the command as a list of words (perhaps with quoting).

    As @sharkdp mentioned, I currently use --shell '' to disable the shell, but that is quite ugly UX; however I wanted to keep the patch as small as possible.

    Also, given that Rust doesn't easily allow cross-compilation, I can't run cargo check --target ...windows..., so although I've tried to get the code right on Windows, there is most likely a compile error.


    As mentioned in the issue, at least in terms of outputs it seems that hyperfine already measures the shell overhead and subtracts it from the results. Thus this patch is mostly meant to speed up the overall execution by not passing through the shell; it shouldn't change the measured values.

    opened by cipriancraciun 11
  • Add possibility to pass float values or range to --parameter-scan

    When using hyperfine, I am missing the possibility of specifying a step size or floating-point values for --parameter-scan:

    hyperfine --parameter-scan i 1 10 2 "sleep {i}"
    

    or

    hyperfine --parameter-scan i 0.1 1.0 "sleep {i}"
    
    feature-request good first issue 
    opened by guilhermeleobas 11
  • output median

    I'd like to measure the median instead of the mean. Currently, I have a perl script that reads the JSON output and calculates the quartiles. But that's not very convenient. Do you think it would be good to output the median (instead of, or in addition to, the mean)?

    Here are some points:

    • If cron fires during a benchmark, the mean might be off
    • It seems many academic papers prefer the median.

    PS: Thanks for hyperfine. It's much more convenient than time.
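
    For what it's worth, a minimal sketch of extracting the median from the JSON export with jq, assuming recent hyperfine versions that include a median field per result:

    jq '.results[] | {command, median}' results.json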

    feature-request question 
    opened by hosewiejacke 10
  • [FR] Allow caching of the shell startup time

    I don't know what algorithm hyperfine uses to determine how many samples to draw for determining the shell startup time, but when I use my custom "shell" that sends the code to my (local) REST server to be evaled, hyperfine takes a few seconds to determine the shell startup time (the commands execute fast themselves). I think that if hyperfine just caches this measurement for later reuse, it would make this workflow that much faster.

    feature-request 
    opened by NightMachinery 9
  • Multiple markdowns from different tests

    Hi;

    I am using hyperfine to test some of my software against some other software, and I would like to know if it would be possible to add a feature that allows merging different markdown files into one, or generating a single markdown file with two columns referencing the data for each test (this would make it easier to compare the results of two different test sets).

    (I am not sure if my wall of text is clear and understandable; please let me know and I will try to explain what I have in mind a little bit better)

    opened by InfRandomness 9
  • Display parameter values if they aren't used in the benchmarked command

    Not sure what the UI should look like, but if you only use parameter values in a prepare command, you currently have no way of knowing which benchmarks used which parameters.

    opened by SUPERCILEX 0
  • Bump assert_cmd from 2.0.5 to 2.0.7

    Bumps assert_cmd from 2.0.5 to 2.0.7.

    Changelog

    Sourced from assert_cmd's changelog.

    [2.0.7] - 2022-12-02

    [2.0.6] - 2022-11-04

    Fixes

    • Hide internal-only optional dependencies
    Commits
    • 5b0e522 chore: Release assert_cmd version 2.0.7
    • 2cd8f58 Merge pull request #156 from assert-rs/renovate/actions-setup-python-4.x
    • fe19d0c chore(deps): update actions/setup-python action to v4
    • 130c918 Merge pull request #155 from assert-rs/renovate/actions-checkout-3.x
    • cab9d73 Merge pull request #157 from assert-rs/renovate/pre-commit-action-3.x
    • f8cb899 Merge pull request #158 from assert-rs/renovate/swatinem-rust-cache-2.x
    • 8e8c6ff Merge pull request #159 from assert-rs/renovate/concolor-0.x
    • 6df1cc7 chore(deps): update rust crate concolor to 0.0.11
    • 5add4fb chore: Iterate on renovate
    • f257e9b chore(deps): update swatinem/rust-cache action to v2
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Bump serde from 1.0.147 to 1.0.152

    Bumps serde from 1.0.147 to 1.0.152.

    Release notes

    Sourced from serde's releases.

    v1.0.152

    • Documentation improvements

    v1.0.151

    • Update serde::{ser,de}::StdError to re-export core::error::Error when serde is built with feature="std" off and feature="unstable" on (#2344)

    v1.0.150

    • Relax some trait bounds from the Serialize impl of HashMap and BTreeMap (#2334)
    • Enable Serialize and Deserialize impls of std::sync::atomic types on more platforms (#2337, thanks @​badboy)

    v1.0.149

    • Relax some trait bounds from the Serialize impl of BinaryHeap, BTreeSet, and HashSet (#2333, thanks @​jonasbb)

    v1.0.148

    • Support remote derive for generic types that have private fields (#2327)
    Commits
    • ccf9c6f Release 1.0.152
    • b25d0ea Link to Hjson data format
    • 4f4557f Link to bencode data format
    • bf400d6 Link to serde_tokenstream data format
    • 4d2e36d Wrap flexbuffers bullet point to 80 columns
    • df6310e Merge pull request #2347 from dtolnay/docsrs
    • 938ab5d Replace docs.serde.rs links with intra-rustdoc links
    • ef5a0de Point documentation links to docs.rs instead of docs.serde.rs
    • 5d186c7 Opt out -Zrustdoc-scrape-examples on docs.rs
    • 44bf363 Release 1.0.151
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies 
    opened by dependabot[bot] 0
  • Suspiciously low User and Sys time

    I am benchmarking the mold linker on an in-house application, using hyperfine 1.15.0 (from conda-forge) to drive the benchmarks.

    While the wall-clock times reported seem correct, the "User" and "Sys" times seem way too low (and also are not impacted by the number of threads used). Example:

    Benchmark 1: mold180 (1 threads)
      Time (mean ± σ):     24.302 s ±  0.477 s    [User: 0.392 s, System: 0.234 s]
      Range (min … max):   23.836 s … 24.789 s    3 runs
    
    Benchmark 2: mold180 (2 threads)
      Time (mean ± σ):     14.871 s ±  0.597 s    [User: 0.385 s, System: 0.246 s]
      Range (min … max):   14.351 s … 15.523 s    3 runs
    

    The benchmark invokes the linker by calling ninja MyBinary (with --setup and --prepare used to make sure ninja does not do much more than invoke the linker).

    Specs of the machine used to run the benchmark:

    • OS: Ubuntu 20.04 with kernel 5.4
    • CPU: Dual socket Intel Xeon Gold 5218
    • RAM: 128GB
    opened by moncefmechri 0