Fast suffix arrays for Rust (with Unicode support).

Overview

suffix

Fast linear time & space suffix arrays for Rust. Supports Unicode!


Dual-licensed under MIT or the UNLICENSE.

Documentation

https://docs.rs/suffix

If you just want the details on the construction algorithm used, see the documentation for the SuffixTable type. This is where you'll find info on exactly how much overhead is required.

Installation

This crate works with Cargo and is on crates.io. The package is regularly updated. Add it to your Cargo.toml like so:

[dependencies]
suffix = "1.2"

Examples

Usage is simple. Just create a suffix array and search:

use suffix::SuffixTable;

fn main() {
  let st = SuffixTable::new("the quick brown fox was quick.");
  assert_eq!(st.positions("quick"), &[4, 24]);
}

There is also a command line program, stree, that can be used to visualize suffix trees:

git clone git://github.com/BurntSushi/suffix
cd suffix/stree_cmd
cargo build --release
./target/release/stree "banana" | dot -Tpng | xv -

And here's what it looks like:

"banana" suffix tree

Status of implementation

The big thing missing at the moment is a generalized suffix array. I started out with the intention to build them into the construction algorithm, but this has proved more difficult than I thought.

A kind-of-sort-of compromise is to append your distinct texts together, and separate them with a character that doesn't appear in your document. (This is technically incorrect, but maybe your documents don't contain any NUL characters.) During construction of this one giant string, you should record the offsets of where each document starts and stops. Then build a SuffixTable with your giant string. After searching with the SuffixTable, you can find the original document by doing a binary search on your list of documents.
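
Here is a rough sketch of that compromise, under the assumptions above; the NUL separator and the recorded end offsets are illustrative choices, not an API provided by this crate:

use suffix::SuffixTable;

fn main() {
    let docs = ["the quick brown fox", "jumps over", "the lazy dog"];

    // Join the documents with a NUL byte (assumed to appear in none of them),
    // recording where each document ends in the combined text.
    let mut all = String::new();
    let mut ends = Vec::new();
    for d in &docs {
        all.push_str(d);
        all.push('\u{0}');
        ends.push(all.len());
    }

    let st = SuffixTable::new(&all);

    // Map each hit back to the document containing it with a binary search
    // over the recorded end offsets.
    for &pos in st.positions("the") {
        let doc = ends.partition_point(|&e| e <= pos as usize);
        println!("found in document {}: {:?}", doc, docs[doc]);
    }
}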

I'm currently experimenting with different techniques to do this.

Benchmarks

Here are some very rough benchmarks that compare suffix table searching with substring searching using the standard library. Note that these benchmarks explicitly do not include the construction of the suffix table. The premise of a suffix table is that you can afford to do that once---but you hope to gain much faster queries once you do.

test search_scan_exists_many            ... bench:       2,964 ns/iter (+/- 180)
test search_scan_exists_one             ... bench:          19 ns/iter (+/- 1)
test search_scan_not_exists             ... bench:      84,645 ns/iter (+/- 3,558)
test search_suffix_exists_many          ... bench:         228 ns/iter (+/- 65)
test search_suffix_exists_many_contains ... bench:         102 ns/iter (+/- 10)
test search_suffix_exists_one           ... bench:         162 ns/iter (+/- 13)
test search_suffix_exists_one_contains  ... bench:           8 ns/iter (+/- 0)
test search_suffix_not_exists           ... bench:         177 ns/iter (+/- 21)
test search_suffix_not_exists_contains  ... bench:          50 ns/iter (+/- 6)

The "many" benchmarks test repeated queries that match. The "one" benchmarks test a single query that matches. The "not_exists" benchmarks test a single query that does not match. Finally, the "contains" benchmarks test existence rather than finding all positions.
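
For concreteness, the contains-style and positions-style queries exercised here look like this (reusing the string from the example above):

use suffix::SuffixTable;

fn main() {
    let st = SuffixTable::new("the quick brown fox was quick.");
    // "contains" only answers whether the query occurs at all...
    assert!(st.contains("quick"));
    // ...while "positions" reports every match offset.
    assert_eq!(st.positions("quick"), &[4, 24]);
}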

One thing you might take away from here is that you'll get a very large performance boost if many of your queries don't match. A linear scan takes a long time to fail!

And here are some completely useless benchmarks on suffix array construction. They compare the linear time algorithm with the naive construction algorithm (sort all suffixes, which is O(n^2 log n)).

test naive_dna_medium                   ... bench:  22,307,313 ns/iter (+/- 939,557)
test naive_dna_small                    ... bench:   1,785,734 ns/iter (+/- 43,401)
test naive_small                        ... bench:         228 ns/iter (+/- 10)
test sais_dna_medium                    ... bench:   7,514,327 ns/iter (+/- 280,544)
test sais_dna_small                     ... bench:     712,938 ns/iter (+/- 34,730)
test sais_small                         ... bench:       1,038 ns/iter (+/- 58)

These benchmarks might make you say, "Whoa, the special algorithm isn't that much faster." That's because the data just isn't big enough. And when it is big enough, a micro benchmark is useless. Why? Because using the naive algorithm will just burn your CPUs until the end of time.
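
For reference, a minimal sketch of that naive construction (over bytes for simplicity; not this crate's actual code):

// Materialize every suffix start index and sort by comparing suffixes directly.
// Each comparison is O(n) and there are O(n log n) of them, hence O(n^2 log n).
fn naive_suffix_array(text: &str) -> Vec<u32> {
    let bytes = text.as_bytes();
    let mut sa: Vec<u32> = (0..bytes.len() as u32).collect();
    sa.sort_by(|&a, &b| bytes[a as usize..].cmp(&bytes[b as usize..]));
    sa
}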

It would be more useful to compare this to other suffix array implementations, but I haven't had time yet. Moreover, most (all?) don't support Unicode and instead operate on bytes, which means they aren't paying the overhead of decoding UTF-8.

Comments
  • Recommendations for dictionary of strings with wildcards?

    I am implementing an MQTT Broker. After connection, a client subscribes to one or more topics, which might contain one of two wildcard types.
    A regular topic might look like foo/bar/baz/blah .
    A # wildcard would match the entirety of the remaining string, a + wildcard would match only a single path segment, e.g. foo/bar/+/blah. When a message is published to a topic, the broker would find all matching subscriptions and deliver that message accordingly. This is a bit of a twist on the traditional string search problem, because the wildcards are in the dictionary instead of the query string.

    It is expected that the broker should be able to handle 10,000s of connections, with each connection subscribing to an average of 20 topics. It is also expected that there would be many duplicate connections.

    I can think of one approach, though I'm not thrilled by it: build the table using only the path segments themselves, and perform successive searches using only the path segments of the query. Then I could use the resulting indices into the table as keys into a map of subscribers.

    It would still be a bit tricky, because it would require an n-gram search for the resulting subscribers.

    e.g. subscribers:
    A: is at pizza/+/meats/sausage
    C: is at pizza/toppings/fruits/tomatoes
    D: is at pizza/#

    Naively**, the dictionary is

    pizza/+/meats/sausage$pizza/toppings/fruits/tomatoes$pizza/#
    01234567890123456789012345678901234567890123456789
    

    A message is published at pizza/toppings/fruits/tomatoes.
    A match at 0, 24, 31, 36 would find subscriber A. A match at 0 would find subscriber D.

    ** A big optimization here is that the dictionary doesn't need to duplicate path segments, so we could ensure they're unique. Given the nature of MQTT subscriptions, this would be a big win.

    I'm thinking that might be too much effort trying to wedge the suffix table approach where it doesn't fit, and maybe I'm better off building a suffix tree where I can explicitly create and handle wildcard nodes. Also a bitmap index seems promising.

    Even simpler, but perhaps less performant, would be to just build a massive Regex out of each subscription string. (This might work for 100 subscriptions, but would get unwieldy at 1,000+, which is certainly in the realm of possibility.)

    p.s., I just found http://blog.burntsushi.net/transducers/ which I am attempting to digest.

    opened by rrichardson 12
  • Performances

    Hi,

    I needed a high-performance suffix array lib some time ago for a bioinformatics project, and I ended up working with a crude FFI binding to divsufsort.

    Just so you know, its performance while creating the array blows suffix out of the water. So feel free to take a glance if it interests you, or to close this issue otherwise ;)

    opened by delehef 7
  • approaching 1.0

    This crate enjoys a small API and has had some minor breaking changes over its lifetime, but given its modest functionality, I don't foresee a major API refactoring in its near future. Therefore, I'd like to move this to 1.0.

    When I initially built this crate, I did have a grander vision for building a more complete suffix array implementation. In particular, while this crate provides a nearly optimal construction algorithm implementation, it does not provide an optimal search algorithm implementation. Search should be implementable in O(p + log n) time (where p ~ len(query) and n ~ len(haystack)), but is currently O(p * log n). Improving the bound requires some sophisticated implementation tricks that have proved difficult to extract from the literature. I offered a bounty on a StackOverflow question and got an answer, but I haven't digested it yet. Nevertheless, a plain suffix table is plenty useful in its own right, so a 1.0 release shouldn't be blocked on further improvements, even if they eventually require a rethink of the public API.

    help wanted 
    opened by BurntSushi 6
  • Any interest in string B trees?

    After his linear time suffix array work, Pang Ko came up with a string B-tree data structure for cache-oblivious access:

    https://lib.dr.iastate.edu/rtd/15942/

    Any interest in a PR for a string B-tree, or should I make a new repository? Ideally it would handle RAM, local disk, and a network store like AWS S3. I know at least one team at Amazon that would use this internally at scale. Toss in a Bloom filter on the network store shards and it could save obscene amounts of IO for cold reads.

    opened by chadbrewbaker 2
  • Ownership of the table

    I had a question, or perhaps a suggestion, but I am new to Rust, so I might be missing something: the SuffixTable struct stores the text as a Cow, but stores the actual table/array as a vector that is always owned. I wondered if you have considered making the table a Cow as well.

    The immediate benefit that I see is that one could write the constructed table to disk and then later reconstruct the SuffixTable from a memory-mapped table via from_parts.

    I made a quick modification to use Cow for the table member and all unit tests pass.
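
    A minimal sketch of the round trip this would enable, assuming the table and from_parts methods discussed in this thread (the disk/mmap step is stubbed out with a plain Vec):

    use suffix::SuffixTable;

    fn main() {
        let text = "the quick brown fox was quick.";
        let st = SuffixTable::new(text);

        // Pretend this Vec was written to disk and later memory-mapped back in.
        let saved: Vec<u32> = st.table().to_vec();

        // Rebuild the suffix table without re-running construction.
        let restored = SuffixTable::from_parts(text, saved);
        assert_eq!(restored.positions("quick"), &[4, 24]);
    }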

    opened by danieldk 2
  • Add `any_position` to get arbitrary one of the positions

    This is 4.5× faster than positions for my use case in the following representative benchmark.

    Use case: to deal with serde Deserializers that do not support producing borrowed data (&'de str) we can easily just look up every string produced by the deserializer in the original input, and thereby turn a transient string (&str) into a borrowed string (&'de str) for each one found anywhere in the input. Specifically for serde_yaml, the performance of SuffixTable::new does not matter because we can run it in parallel with yaml-rust's Parser::load step, which will always take longer than the SuffixTable construction.

    [If you happen to know: is suffix array even the right data structure for this use case? A key characteristic of the use case is that Σm = O(n) i.e. I only ever look up non-overlapping substrings of the original text, so the sum of lengths of all queries is no more than n. With a suffix array each query is O(m log n) so total O(n log n). From some naive napkin sketches I believe O(n) is achievable with a hash-based structure, with amortized O(m) time per query. I'm not really interested in inventing a novel algorithm though if this is a solved problem.]

    #![feature(test)]
    extern crate test;
    
    use serde_json::Value;
    use suffix::SuffixTable;
    
    fn setup() -> (String, Vec<String>) {
        const JSON: &str = include_str!("twitter.json"); // https://github.com/serde-rs/json-benchmark/tree/master/data
        let json_value: Value = serde_json::from_str(JSON).unwrap();
        let yaml = serde_yaml::to_string(&json_value).unwrap();
    
        let mut strings = Vec::new();
        let mut stack = Vec::new();
        stack.push(json_value);
        while let Some(value) = stack.pop() {
            match value {
                Value::Null | Value::Bool(_) | Value::Number(_) => {}
                Value::String(string) => strings.push(string),
                Value::Array(array) => stack.extend(array),
                Value::Object(object) => {
                    for (key, value) in object {
                        strings.push(key);
                        stack.push(value);
                    }
                }
            }
        }
    
        (yaml, strings)
    }
    
    #[bench]
    fn bench_positions(b: &mut test::Bencher) {
        let (yaml, strings) = setup();
        let index = SuffixTable::new(&yaml);
        b.iter(|| {
            for string in &strings {
                test::black_box(index.positions(string));
            }
        });
    }
    
    #[bench]
    fn bench_any_position(b: &mut test::Bencher) {
        let (yaml, strings) = setup();
        let index = SuffixTable::new(&yaml);
        b.iter(|| {
            for string in &strings {
                test::black_box(index.any_position(string));
            }
        });
    }
    
    opened by dtolnay 1
  • Solving the longest common substring (LCS) problem

    I've recently been teaching myself bioinformatics and algorithms. I got really confused while delving into suffix arrays, particularly when it comes to the use of sentinels.

    As suggested by WilliamFiset in his video, a generalized suffix array using different types of sentinels must be used to solve the LCS problem. However, I don't really see the need for sentinels. Indeed, according to (Shrestha et al., 2014), which you cite, sentinels are crucial for suffix trees but are not absolutely needed for suffix arrays, though some algorithms for constructing suffix arrays need them.

    Based on WilliamFiset's video (which uses sentinels), I devised the following algorithm for solving the LCS problem using this crate (which does not use sentinels and is not a so-called 'generalized suffix array').

    So the question is: what exactly defines a generalized suffix array, and how is a generalized suffix array (with multiple sentinels) superior to the following algorithm in solving the LCS problem?

    (Or perhaps the following algorithm is itself an implementation of a generalized suffix array without sentinels? I saw from some sources that a 'generalized suffix array/tree' is just a fancy name for a suffix array/tree to which multiple strings can be added.)

    extern crate suffix;
    use suffix::SuffixTable;
    
    fn main() {
        // find the longest common substring (LCS) of the following 5 strings
        let strs =
            ["ZYABCAGB", "BCAGDTZYY", "DACAGZZYSC", "CAGYZYSAU", "CAZYUCAGF"];
        let number_of_strings = strs.len();
        let mut boundaries = Vec::new();
        let mut concatenated = String::new();
        for s in strs.iter() {
            concatenated.push_str(*s);
            boundaries.push(concatenated.len());
        }
    
        let sa = SuffixTable::new(&concatenated);
    
        let pos = sa.table();
        let lcp = sa.lcp_lens();
    
        // for demonstration.
        // SA: suffix array
        // LCP: LCP array
        // n: the index of the original string this character belongs to
        // suffix: the suffix (of the concatenated string)
        println!(" SA LCP n Suffix");
        println!("--------------------");
        for (&p, &l) in pos.iter().zip(lcp.iter()) {
            let n = string_number(p, &boundaries);
            println!("{:>3} {:>3} {} {}", p, l, n, &concatenated[(p as usize)..])
        }
        println!("{:?}", boundaries);
    
        let mut lcs_len = 0u32;
        let mut lcs_pos: u32 = 0;
    
        'outer: for (win_p, win_l) in
            pos.windows(number_of_strings).zip(lcp.windows(number_of_strings))
        {
            // examine if each window contains substrings coming from different original strings
            // use a vector of booleans to track whether a substring has been included.
            // upon duplication, abort and continue scanning the next frame
            let mut included = vec![false; number_of_strings];
            // this step should be O(number_of_strings) ?
            for &p in win_p {
                let n = string_number(p, &boundaries);
                if included[n] {
                    continue 'outer;
                } else {
                    included[n] = true;
                }
            }
            // this window contains exactly one suffix from each original string;
            // calculate the LCS within this window
            let m = win_l.iter().skip(1).min().unwrap(); // win_l always has length number_of_strings
            let this_cs_len = *m;
            let this_cs_pos = win_p[0];
            let this_cs = concatenated
                .chars()
                .skip(this_cs_pos as usize)
                .take(this_cs_len as usize)
                .collect::<String>();
            if this_cs.len() > 0 {
                println!("Found common substring: {}", this_cs);
            }
    
            if *m > lcs_len {
                lcs_len = this_cs_len;
                lcs_pos = this_cs_pos;
            }
        }
    
        println!(
            "LCS: {:?}",
            concatenated
                .chars()
                .skip(lcs_pos as usize)
                .take(lcs_len as usize)
                .collect::<String>()
        );
        println!("LCS length: {}", lcs_len);
    }
    
    fn string_number(position: u32, boundaries: &[usize]) -> usize {
        match boundaries.binary_search(&(position as usize)) {
            Ok(idx) => idx + 1,
            Err(idx) => idx,
        }
    }
    
    

    which prints:

    SA LCP n Suffix
    --------------------
      2   0 0 ABCAGBBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     18   1 2 ACAGZZYSCCAGYZYSAUCAZYUCAGF
      5   1 0 AGBBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     10   2 1 AGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     42   2 4 AGF
     28   2 3 AGYZYSAUCAZYUCAGF
     20   2 2 AGZZYSCCAGYZYSAUCAZYUCAGF
     34   1 3 AUCAZYUCAGF
     37   1 4 AZYUCAGF
      7   0 0 BBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
      3   1 0 BCAGBBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
      8   4 1 BCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
      4   0 0 CAGBBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
      9   3 1 CAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     41   3 4 CAGF
     27   3 3 CAGYZYSAUCAZYUCAGF
     19   3 2 CAGZZYSCCAGYZYSAUCAZYUCAGF
     36   2 4 CAZYUCAGF
     26   1 2 CCAGYZYSAUCAZYUCAGF
     17   0 2 DACAGZZYSCCAGYZYSAUCAZYUCAGF
     12   1 1 DTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     44   0 4 F
      6   0 0 GBBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     11   1 1 GDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     43   1 4 GF
     29   1 3 GYZYSAUCAZYUCAGF
     21   1 2 GZZYSCCAGYZYSAUCAZYUCAGF
     33   0 3 SAUCAZYUCAGF
     25   1 2 SCCAGYZYSAUCAZYUCAGF
     13   0 1 TZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     40   0 4 UCAGF
     35   3 3 UCAZYUCAGF
      1   0 0 YABCAGBBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     16   1 1 YDACAGZZYSCCAGYZYSAUCAZYUCAGF
     32   1 3 YSAUCAZYUCAGF
     24   2 2 YSCCAGYZYSAUCAZYUCAGF
     39   1 4 YUCAGF
     15   1 1 YYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     30   1 3 YZYSAUCAZYUCAGF
      0   0 0 ZYABCAGBBCAGDTZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     31   2 3 ZYSAUCAZYUCAGF
     23   3 2 ZYSCCAGYZYSAUCAZYUCAGF
     38   2 4 ZYUCAGF
     14   2 1 ZYYDACAGZZYSCCAGYZYSAUCAZYUCAGF
     22   1 2 ZZYSCCAGYZYSAUCAZYUCAGF
    [8, 17, 27, 36, 45]
    Found common substring: A
    Found common substring: AG
    Found common substring: CAG
    Found common substring: G
    Found common substring: Y
    Found common substring: ZY
    LCS: "CAG"
    LCS length: 3
    
    opened by TianyiShi2001 1
  • Trying to access suffix on japanese text causes panic

    Text example

    この麗しき御方こそが、甲斐この麗しき御方こそが、甲斐源氏の本この麗しき御方こそが、甲斐源氏の本流たる武この麗しき御方こそが、甲斐源氏の本流たる武田家の第この麗しき御方こそが、甲斐源氏の本流たる武田家の第十九代目この麗しき御方こそが、甲斐源氏の本流たる武田家の第十九代目の当主。武この麗しき御方こそが、甲斐源氏の本流たる武田家の第十九代目の当主。武田信玄そこの麗しき御方こそが、甲斐源氏の本流たる武田家の第十九代目の当主。武田信玄その人だ。この麗しき御方こそが、甲斐源氏の本流たる武田家の第十九代目の当主。武田信玄その人だ。

    My test case is pretty simple:

    let suffix_table = suffix::SuffixTable::new(text); // text is a variable containing the quoted string above
    println!("suffix[1]={}", suffix_table.suffix(1).to_string());
    

    Code will panic:

    panicked at 'byte index 70 is not a char boundary; it is inside '、' (bytes 69..72) of この麗しき御方こそが、甲斐この麗しき御方こそが、甲斐源氏の本この麗しき御方こそが、甲斐 源氏の本流たる武この麗しき御方こそが、甲斐源氏の本流たる武田家の第この麗しき御方こそ[...]', src\libcore\str\mod.rs:2179

    opened by DoumanAsh 1
  • Fix invariant check in from_parts.

    The suffix array table has the same length as the text in bytes. The invariant check in from_parts, however, checks that the suffix array size is the same as the number of characters. This invariant holds when a string only contains ASCII characters, but it fails otherwise.

    Modify the check to use the length of the string in bytes.
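
    In other words, a hedged sketch of the corrected check, with hypothetical names rather than the crate's actual code:

    fn check_parts(text: &str, table: &[u32]) {
        // One table entry per *byte* of text. text.len() counts bytes in Rust;
        // text.chars().count() would only agree for pure-ASCII input.
        assert_eq!(table.len(), text.len());
    }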

    opened by danieldk 1
  • Example update

    • Added 2 examples and an examples/ directory
    • Updated the example found in the README so it compiles
    • Added a step to the GitHub Actions CI; examples/basic will run as confirmation that the README example works
    opened by maxdobeck 0