An implementation of regular expressions for Rust. This implementation uses finite automata and guarantees linear time matching on all inputs.

Overview

regex

A Rust library for parsing, compiling, and executing regular expressions. Its syntax is similar to Perl-style regular expressions, but lacks a few features like look around and backreferences. In exchange, all searches execute in linear time with respect to the size of the regular expression and search text. Much of the syntax and implementation is inspired by RE2.


Documentation

Module documentation with examples. The module documentation also includes a comprehensive description of the syntax supported.

Documentation with examples for the various matching functions and iterators can be found on the Regex type.

Usage

Add this to your Cargo.toml:

[dependencies]
regex = "1"

and this to your crate root (if you're using Rust 2015):

extern crate regex;

Here's a simple example that matches a date in YYYY-MM-DD format and prints the year, month and day:

use regex::Regex;

fn main() {
    let re = Regex::new(r"(?x)
(?P<year>\d{4})  # the year
-
(?P<month>\d{2}) # the month
-
(?P<day>\d{2})   # the day
").unwrap();
    let caps = re.captures("2010-03-14").unwrap();

    assert_eq!("2010", &caps["year"]);
    assert_eq!("03", &caps["month"]);
    assert_eq!("14", &caps["day"]);
}

If you have lots of dates in text that you'd like to iterate over, then it's easy to adapt the above example with an iterator:

use regex::Regex;

const TO_SEARCH: &'static str = "
On 2010-03-14, foo happened. On 2014-10-14, bar happened.
";

fn main() {
    let re = Regex::new(r"(\d{4})-(\d{2})-(\d{2})").unwrap();

    for caps in re.captures_iter(TO_SEARCH) {
        // Note that all of the unwraps are actually OK for this regex
        // because the only way for the regex to match is if all of the
        // capture groups match. This is not true in general though!
        println!("year: {}, month: {}, day: {}",
                 caps.get(1).unwrap().as_str(),
                 caps.get(2).unwrap().as_str(),
                 caps.get(3).unwrap().as_str());
    }
}

This example outputs:

year: 2010, month: 03, day: 14
year: 2014, month: 10, day: 14

Usage: Avoid compiling the same regex in a loop

It is an anti-pattern to compile the same regular expression in a loop since compilation is typically expensive. (It takes anywhere from a few microseconds to a few milliseconds depending on the size of the regex.) Not only is compilation itself expensive, but this also prevents optimizations that reuse allocations internally to the matching engines.

In Rust, it can sometimes be a pain to pass regular expressions around if they're used from inside a helper function. Instead, we recommend using the lazy_static crate to ensure that regular expressions are compiled exactly once.

For example:

use lazy_static::lazy_static;
use regex::Regex;

fn some_helper_function(text: &str) -> bool {
    lazy_static! {
        static ref RE: Regex = Regex::new("...").unwrap();
    }
    RE.is_match(text)
}

Specifically, in this example, the regex will be compiled when it is used for the first time. On subsequent uses, it will reuse the previous compilation.

Usage: match regular expressions on &[u8]

The main API of this crate (regex::Regex) requires the caller to pass a &str for searching. In Rust, an &str is required to be valid UTF-8, which means the main API can't be used for searching arbitrary bytes.

To match on arbitrary bytes, use the regex::bytes::Regex API. The API is identical to the main API, except that it takes an &[u8] to search on instead of an &str. By default, . will match any byte using regex::bytes::Regex, while . will match any UTF-8 encoded Unicode scalar value using the main API.

This example shows how to find all null-terminated strings in a slice of bytes:

use regex::bytes::Regex;

let re = Regex::new(r"(?P<cstr>[^\x00]+)\x00").unwrap();
let text = b"foo\x00bar\x00baz\x00";

// Extract all of the strings without the null terminator from each match.
// The unwrap is OK here since a match requires the `cstr` capture to match.
let cstrs: Vec<&[u8]> =
    re.captures_iter(text)
      .map(|c| c.name("cstr").unwrap().as_bytes())
      .collect();
assert_eq!(vec![&b"foo"[..], &b"bar"[..], &b"baz"[..]], cstrs);

Notice here that the [^\x00]+ will match any byte except for NUL. When using the main API, [^\x00]+ would instead match any valid UTF-8 sequence except for NUL.

Usage: match multiple regular expressions simultaneously

This demonstrates how to use a RegexSet to match multiple (possibly overlapping) regular expressions in a single scan of the search text:

use regex::RegexSet;

let set = RegexSet::new(&[
    r"\w+",
    r"\d+",
    r"\pL+",
    r"foo",
    r"bar",
    r"barfoo",
    r"foobar",
]).unwrap();

// Iterate over and collect all of the matches.
let matches: Vec<_> = set.matches("foobar").into_iter().collect();
assert_eq!(matches, vec![0, 2, 3, 4, 6]);

// You can also test whether a particular regex matched:
let matches = set.matches("foobar");
assert!(!matches.matched(5));
assert!(matches.matched(6));

Usage: enable SIMD optimizations

SIMD optimizations are enabled automatically on Rust stable 1.27 and newer. If you use nightly Rust, it must be a recent enough nightly to include the stabilized SIMD features.

Usage: a regular expression parser

This repository contains a crate that provides a well-tested regular expression parser, an abstract syntax, and a high-level intermediate representation for convenient analysis. It provides no facilities for compilation or execution. This may be useful if you're implementing your own regex engine or otherwise need to do analysis on the syntax of a regular expression. It is otherwise not recommended for general use.

See the regex-syntax documentation.

Crate features

This crate comes with several features that permit tweaking the trade-off between binary size, compilation time, and runtime performance. Users of this crate can selectively disable Unicode tables or opt out of the various optimizations this crate performs.

When all of these features are disabled, runtime match performance may be much worse, but if you're matching on short strings, or if high performance isn't necessary, then such a configuration is perfectly serviceable. To disable all such features, use the following Cargo.toml dependency configuration:

[dependencies.regex]
version = "1.3"
default-features = false
# regex currently requires the standard library, so you must re-enable it.
features = ["std"]

This will reduce the dependency tree of regex down to a single crate (regex-syntax).

The full set of features one can disable is listed in the "Crate features" section of the documentation.
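As an illustration, here is one way to re-enable a single piece of Unicode support on top of a minimal configuration. The `std` and `unicode-case` feature names are the ones documented for regex 1.3; treat the exact selection as an example rather than a recommendation:

```toml
[dependencies.regex]
version = "1.3"
default-features = false
# Keep the standard library, and re-enable only the Unicode
# case-folding data needed for case-insensitive matching.
features = ["std", "unicode-case"]
```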

Minimum Rust version policy

This crate's minimum supported rustc version is 1.28.0.

The current tentative policy is that the minimum Rust version required to use this crate can be increased in minor version updates. For example, if regex 1.0 requires Rust 1.20.0, then regex 1.0.z for all values of z will also require Rust 1.20.0 or newer. However, regex 1.y for y > 0 may require a newer minimum version of Rust.

In general, this crate will be conservative with respect to the minimum supported version of Rust.

License

This project is licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE)
  • MIT license (LICENSE-MIT)

at your option.

The data in regex-syntax/src/unicode_tables/ is licensed under the Unicode License Agreement (LICENSE-UNICODE).

Comments
  • implement matching on byte strings


    Currently, as far as I can see, the regexes here are only for Unicode text. But they could be used to parse binary files as well (or a mixture of binary and text).

    For example, how could one implement `strings --all` using regexes in Rust?
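    A rough sketch of such a `strings`-like scan, using the crate's bytes API to find runs of printable ASCII in arbitrary binary data (this example is illustrative and not from the original issue; the length threshold of 4 mimics the `strings` default):

    ```rust
    use regex::bytes::Regex;

    fn main() {
        // With Unicode disabled via (?-u), the class matches raw bytes
        // in the printable ASCII range, at least 4 in a row.
        let re = Regex::new(r"(?-u)[\x20-\x7E]{4,}").unwrap();
        let data: &[u8] = b"\x00\x01hello world\xFF\x02rust\x03";
        for m in re.find_iter(data) {
            // Matches are guaranteed printable ASCII, so lossy
            // conversion never actually loses anything here.
            println!("{}", String::from_utf8_lossy(m.as_bytes()));
        }
        // prints "hello world" then "rust"
    }
    ```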

    help wanted question 
    opened by vi 91
  • implement a DFA matcher


    One of the reasons why RE2/C++ is so fast is because it has two implementations of regex matching: a limited DFA matcher (no sub-capture support) and a full NFA simulation. This crate has the latter, but not the former.

    Adding a DFA matcher should be an implementation detail and shouldn't require any public facing changes.

    This is a pretty involved project. I hope to find time to do this some day, but if someone else wants to tackle it, I'd be happy to help mentor it. (Part of this will be figuring out how to handle the regex! macro. Do we replicate the DFA there too like we do the NFA?)

    enhancement 
    opened by BurntSushi 36
  • thread local cache may cause memory leaks


    The thread local cache causes a memory leak if you use the regex library in environments where you don't have direct control over threads that can be created and destroyed at the platform's whims, such as when executing tasks inside Microsoft IIS. Setting the dfa_size_limit to 0 does not eliminate the cache. Eliminating the cache should also reduce the memory footprint when you have lots of regexes in lots of threads.

    Is it possible to make the thread_local cache optional?

    doc 
    opened by saarw 32
  • perf: Improve dfa matching


    Various improvements from profiling RegexSet::matches usage.

     name                old ns/iter    new ns/iter    diff ns/iter   diff %  speedup 
     misc::is_match_set  73 (342 MB/s)  76 (328 MB/s)             3    4.11%   x 0.96 
     misc::matches_set   857 (29 MB/s)  604 (41 MB/s)          -253  -29.52%   x 1.42 
    
    opened by Marwes 28
  • Add a onepass dfa matcher.


    This PR adds the onepass DFA that I've been working on. I have marked it as a WIP because there is still one failing test case that I need to dig my teeth into.

    Currently the only way to activate the onepass matcher is to call .onepass() on ExecBuilder. I was hoping to get this merged without having it enabled by default at first; then I can do a separate PR which adds more testing/benchmarking and actually enables it. That approach also provides more time to discuss how exactly the onepass matcher will get used by the execution planner. My guess is that the onepass matcher should always be used when it can be, but it may turn out that shoveling capture groups around is pricier than the DFA startup cost.

    We talked about test coverage over in #68, so I did some simple checks and found that about 300/700 of the tests in the test suite trigger the onepass matcher.

    Closes #68

    opened by ethanpailes 28
  • Add a lazy DFA.


    A lazy DFA is much faster than executing an NFA because it doesn't repeat the work of following epsilon transitions over and over. Instead, it computes states during search and caches them for reuse. We avoid exponential state blow up by bounding the cache in size. When the DFA isn't powerful enough to fulfill the caller's request (e.g., return sub-capture locations), it still runs to find the boundaries of the match and then falls back to NFA execution on the matched region. The lazy DFA can otherwise execute on every regular expression except for those that contain word boundary assertions (\b or \B). (They are tricky to implement in the lazy DFA because they are Unicode aware and therefore require multi-byte look-behind/ahead.) The implementation in this PR is based on the implementation in Google's RE2 library.

    Adding a lazy DFA was a substantial change and required several modifications:

    1. The compiler can now produce both Unicode based programs (still used by the NFA engines) and byte based programs (required by the lazy DFA, but possible to use in the NFA engines too). In byte based programs, UTF-8 decoding is built into the automaton.
    2. A new Exec type was introduced to implement the logic for compiling and choosing the right engine to use on each search.
    3. Prefix literal detection was rewritten to work on bytes.
    4. Benchmarks were overhauled and new ones were added to more carefully track the impact of various optimizations.
    5. A new HACKING.md guide has been added that gives a high-level design overview of this crate.

    Other changes in this commit include:

    1. Protection against stack overflows. All places that once required recursion have now either acquired a bound or have been converted to using a stack on the heap.
    2. Update the Aho-Corasick dependency, which includes memchr2 and memchr3 optimizations.
    3. Add PCRE benchmarks using the Rust pcre bindings.

    Closes #66, #146.

    opened by BurntSushi 24
  • Changed quote to escape


    Escape is a much more familiar name to this type of operation.

    The function will now also only allocate a String when there are actual meta characters to escape, rather than allocating on every call.

    opened by XAMPPRocky 20
  • Regex::must(...) to replace Regex::new(...).unwrap()?


    This issue is intended to brainstorm some ideas on how to report regex parse errors to Rust programmers.

    One of my long running projects has been a rewrite of the regex-syntax crate. There are several goals I'm achieving via this rewrite, and one of them in particular is better error messages for regex parse errors.

    Here are two examples; the original issue shows them as screenshots of the parser's error output.

    The above error messages correspond to the fmt::Display implementation on the error returned by the parser. What I'd like to have happen is for these error messages to appear to programmers when they create malformed regexes in their source code. Specifically, today, the code pattern for building a regex that is known at compile time is:

    let re = Regex::new(...).unwrap();
    

    The issue here is that if the regex contains a parse error, then the unwrap implementation will show the Debug message for the error instead of the Display implementation. That's definitely the right behavior in most cases, but in this case, I'd really like to show a nicer error message. My question is: how can I achieve that?

    I can think of two ways:

    1. Make the Debug implementation defer to the Display implementation. This causes unwrap to show the right error message, but now we don't have a "normal" Debug implementation, which feels kind of wrong to me.
    2. Create a new constructor on regex called must (name subject to bikeshedding) that is basically the same as Regex::new(...).unwrap(), except it will emit the Display impl for the error instead of the Debug impl. The downside here is that users of the regex crate need to use must, and this particular style of handling panics isn't really idiomatic in the Rust ecosystem, where we instead prefer an explicit unwrap. The key difference here is that regexes in the source code with parse errors are inherently programmer errors, and programmers should get nice regex error messages, because the Debug impl isn't going to be particularly helpful.

    Are there other ways of approaching this problem? What do other people think? My inclination right now is hovering around (2), particularly since Regex::new(...).unwrap() is not only extraordinarily common, but is also correct in the vast majority of cases (with it being incorrect only when the regex isn't known until runtime). That is, normalizing that code pattern with an explicit constructor feels like a good thing to me on its own, but it's not clear how that balances against established idioms.
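    A minimal sketch of what option (2) could look like, written here as a free function for illustration (a `must` constructor does not exist in the crate; the name and panic message are hypothetical):

    ```rust
    use regex::Regex;

    // Like Regex::new(...).unwrap(), but panics with the error's
    // Display output, which carries the nicely formatted parse error,
    // instead of the Debug output.
    fn must(pattern: &str) -> Regex {
        match Regex::new(pattern) {
            Ok(re) => re,
            Err(err) => panic!("error parsing regex {:?}:\n{}", pattern, err),
        }
    }

    fn main() {
        let re = must(r"(\d{4})-(\d{2})");
        assert!(re.is_match("2010-03"));
    }
    ```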

    cc @rust-lang/libs @ethanpailes @robinst @killercup @steveklabnik @bluss

    opened by BurntSushi 19
  • implement a faster memory pool


    Problem

    When a regex search executes, it has to choose a matching engine (sometimes more than one) to carry out the search. Each matching engine needs some amount of fixed mutable space on the heap to carry out a search, which I'll call "scratch space." In general, this space is reusable and reusing the space leads to significant performance benefits when using a regular expression to carry out multiple searches. (For example, the scratch space may contain computed DFA states.) Scratch space is used every time a regular expression executes a search. For example, calling re.find_iter("...") will execute possibly many searches, depending on how many matches it finds.

    Here are some constraints I've been working with:

    1. A Regex must be Send and Sync. This permits one to share a regex across multiple threads without any external synchronization.
    2. The regex crate should never spawn a thread.
    3. It's OK to have multiple copies of the scratch space. In other words, it's OK to use a little bit more memory to reduce contention.
    4. Using a regex in a single thread is far more prevalent than sharing a regex across multiple threads, so it is OK to optimize for the single threaded use case.

    Constraint (1) is the killer, because it means synchronizing concurrent access to mutable state. For example, one might have a Arc<Regex> where the Regex is used simultaneously among multiple threads. If we gave up on the Sync bound, then callers would need to either Clone regular expressions to use them across multiple threads, or put them in a Mutex. The latter is a bit surprising since Regex doesn't have any public mutable methods, and it is also very poor performance-wise because one search will completely block all other searches. The former, cloning, is somewhat acceptable if a little wasteful. That is, if we dropped the Sync bound, I'd expect users to clone regular expressions if they want to use them across multiple threads.

    Constraint (2) just seems like good sense. To me, a thread spawned as a result of running a regex violates the principle of least surprise.

    Constraint (3) permits the implementation to be simple and should make contention be a non-factor. If we were more memory conscious, we'd never copy the scratch space, which would mean that each of the regex engines would be forced to do their own type of synchronization. Not only is this much more complex, but it means contention could be a real problem while searching, which seems unfortunate.

    Given all of this, it is my belief that the key thing worth optimizing is the overhead of synchronization itself. The cost of copying the scratch space should be amortized through reuse, and contention should be extremely limited since synchronization only needs to occur at the start and end of every search. (I suppose contention could become an issue if a regex that matches very often on very short spans is used simultaneously across multiple threads. The good thing about this is that the caller could work around this by simply cloning the regex, avoiding contention altogether.)

    Current solution

    The most straight-forward solution to this problem is to wrap some collection data structure in a Mutex. The data structure could trivially be a queue, stack or (singly) linked list. The current implementation is so simple that it can be digested in a quick skim. In particular, the only operations we need to support are get and pop. No ordering invariants are necessary since all copies of scratch space are equally usable for every regex search.
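    A minimal sketch of the kind of pool described above: a Mutex-guarded stack of scratch-space values. The names (`Pool`, `get`, `put`) are illustrative and do not correspond to the crate's actual internals:

    ```rust
    use std::sync::Mutex;

    struct Pool<T> {
        stack: Mutex<Vec<T>>,
        create: fn() -> T,
    }

    impl<T> Pool<T> {
        fn new(create: fn() -> T) -> Pool<T> {
            Pool { stack: Mutex::new(vec![]), create }
        }

        // Pop a cached value if one exists, otherwise build a fresh one.
        // No ordering invariant: any cached value is equally usable.
        fn get(&self) -> T {
            self.stack.lock().unwrap().pop().unwrap_or_else(|| (self.create)())
        }

        // Return a value to the pool so a later search can reuse it.
        fn put(&self, value: T) {
            self.stack.lock().unwrap().push(value);
        }
    }

    fn main() {
        let pool: Pool<Vec<u8>> = Pool::new(Vec::new);
        let mut scratch = pool.get();
        scratch.extend_from_slice(b"cached state");
        pool.put(scratch);
        // The next get returns the pooled value, reusing its allocation.
        assert_eq!(pool.get(), b"cached state".to_vec());
    }
    ```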

    Benchmarks

    I have three sets of benchmarks representing three pretty easy-to-implement strategies. The first is the base line, which uses a RefCell<Vec<T>>. This obviously does no synchronization, so in this world, Regex doesn't impl Sync. The second is the current solution, which uses Mutex<Vec<T>>. The third uses a lock free stack from crossbeam, which uses TreiberStack<T>.

    Here is a comparison between the base line and Mutex<Vec<T>>, showing only benchmarks with more than 10% difference (positive percent indicates how much slower mutexes are than refcells in this case):

    $ cargo-benchcmp rust-refcell.5 rust-mutex.5 --threshold 10
    name                                rust-refcell.5 ns/iter  rust-mutex.5 ns/iter    diff ns/iter  diff %
    misc::anchored_literal_long_match   49 (7,959 MB/s)         79 (4,936 MB/s)                   30  61.22%
    misc::anchored_literal_short_match  48 (541 MB/s)           77 (337 MB/s)                     29  60.42%
    misc::easy0_1K                      226 (4,650 MB/s)        275 (3,821 MB/s)                  49  21.68%
    misc::easy0_32                      182 (324 MB/s)          225 (262 MB/s)                    43  23.63%
    misc::easy1_1K                      257 (4,062 MB/s)        298 (3,503 MB/s)                  41  15.95%
    misc::easy1_32                      160 (325 MB/s)          193 (269 MB/s)                    33  20.62%
    misc::medium_32                     255 (235 MB/s)          289 (207 MB/s)                    34  13.33%
    misc::one_pass_long_prefix          182 (142 MB/s)          218 (119 MB/s)                    36  19.78%
    misc::one_pass_long_prefix_not      154 (168 MB/s)          189 (137 MB/s)                    35  22.73%
    misc::one_pass_short                112 (151 MB/s)          147 (115 MB/s)                    35  31.25%
    misc::one_pass_short_not            112 (151 MB/s)          146 (116 MB/s)                    34  30.36%
    sherlock::everything_greedy         6,838,126 (87 MB/s)     8,261,531 (72 MB/s)        1,423,405  20.82%
    sherlock::everything_greedy_nl      5,502,273 (108 MB/s)    6,761,252 (87 MB/s)        1,258,979  22.88%
    sherlock::letters                   34,884,851 (17 MB/s)    65,399,440 (9 MB/s)       30,514,589  87.47%
    sherlock::letters_lower             33,173,868 (17 MB/s)    60,289,303 (9 MB/s)       27,115,435  81.74%
    sherlock::letters_upper             3,787,833 (157 MB/s)    4,650,326 (127 MB/s)         862,493  22.77%
    sherlock::name_alt4                 257,680 (2,308 MB/s)    295,505 (2,013 MB/s)          37,825  14.68%
    sherlock::name_whitespace           53,913 (11,035 MB/s)    60,149 (9,890 MB/s)            6,236  11.57%
    sherlock::the_whitespace            1,634,006 (364 MB/s)    2,056,631 (289 MB/s)         422,625  25.86%
    sherlock::words                     12,970,070 (45 MB/s)    20,320,171 (29 MB/s)       7,350,101  56.67%
    

    And here is a comparison between the base line and TreiberStack<T>:

    $ cargo-benchcmp rust-refcell.5 rust-treiber.5 --threshold 10
    name                                rust-refcell.5 ns/iter  rust-treiber.5 ns/iter    diff ns/iter   diff %
    misc::anchored_literal_long_match   49 (7,959 MB/s)         129 (3,023 MB/s)                    80  163.27%
    misc::anchored_literal_short_match  48 (541 MB/s)           129 (201 MB/s)                      81  168.75%
    misc::easy0_1K                      226 (4,650 MB/s)        319 (3,294 MB/s)                    93   41.15%
    misc::easy0_32                      182 (324 MB/s)          268 (220 MB/s)                      86   47.25%
    misc::easy1_1K                      257 (4,062 MB/s)        342 (3,052 MB/s)                    85   33.07%
    misc::easy1_32                      160 (325 MB/s)          241 (215 MB/s)                      81   50.62%
    misc::hard_32                       307 (192 MB/s)          382 (154 MB/s)                      75   24.43%
    misc::medium_1K                     609 (1,727 MB/s)        695 (1,513 MB/s)                    86   14.12%
    misc::medium_32                     255 (235 MB/s)          339 (176 MB/s)                      84   32.94%
    misc::no_exponential                485 (206 MB/s)          576 (173 MB/s)                      91   18.76%
    misc::not_literal                   271 (188 MB/s)          350 (145 MB/s)                      79   29.15%
    misc::one_pass_long_prefix          182 (142 MB/s)          268 (97 MB/s)                       86   47.25%
    misc::one_pass_long_prefix_not      154 (168 MB/s)          234 (111 MB/s)                      80   51.95%
    misc::one_pass_short                112 (151 MB/s)          191 (89 MB/s)                       79   70.54%
    misc::one_pass_short_not            112 (151 MB/s)          190 (89 MB/s)                       78   69.64%
    sherlock::everything_greedy         6,838,126 (87 MB/s)     10,548,218 (56 MB/s)         3,710,092   54.26%
    sherlock::ing_suffix                3,027,510 (196 MB/s)    3,548,306 (167 MB/s)           520,796   17.20%
    sherlock::ing_suffix_limited_space  2,965,394 (200 MB/s)    3,444,708 (172 MB/s)           479,314   16.16%
    sherlock::letters                   34,884,851 (17 MB/s)    109,569,267 (5 MB/s)        74,684,416  214.09%
    sherlock::letters_lower             33,173,868 (17 MB/s)    106,638,088 (5 MB/s)        73,464,220  221.45%
    sherlock::letters_upper             3,787,833 (157 MB/s)    6,230,733 (95 MB/s)          2,442,900   64.49%
    sherlock::name_alt4                 257,680 (2,308 MB/s)    363,242 (1,637 MB/s)           105,562   40.97%
    sherlock::name_whitespace           53,913 (11,035 MB/s)    71,406 (8,331 MB/s)             17,493   32.45%
    sherlock::quotes                    947,223 (628 MB/s)      1,065,966 (558 MB/s)           118,743   12.54%
    sherlock::the_whitespace            1,634,006 (364 MB/s)    2,651,996 (224 MB/s)         1,017,990   62.30%
    sherlock::words                     12,970,070 (45 MB/s)    32,089,109 (18 MB/s)        19,119,039  147.41%
    

    I note that the comparisons above seem entirely expected to me. Outside of noise, synchronization makes matching universally slower. Moreover, the benchmarks that actually show up in the list (which is a subset of all benchmarks) correspond to benchmarks with many searches over short haystacks, or many short matches over long haystacks. This is exactly the case where constant overhead will make a difference.

    And, to make it easier to read, a comparison between Mutex<Vec<T>> and TreiberStack<T>:

    $ cargo-benchcmp rust-mutex.5 rust-treiber.5 --threshold 10
    name                                rust-mutex.5 ns/iter  rust-treiber.5 ns/iter    diff ns/iter   diff %
    misc::anchored_literal_long_match   79 (4,936 MB/s)       129 (3,023 MB/s)                    50   63.29%
    misc::anchored_literal_short_match  77 (337 MB/s)         129 (201 MB/s)                      52   67.53%
    misc::easy0_1K                      275 (3,821 MB/s)      319 (3,294 MB/s)                    44   16.00%
    misc::easy0_32                      225 (262 MB/s)        268 (220 MB/s)                      43   19.11%
    misc::easy1_1K                      298 (3,503 MB/s)      342 (3,052 MB/s)                    44   14.77%
    misc::easy1_32                      193 (269 MB/s)        241 (215 MB/s)                      48   24.87%
    misc::hard_32                       334 (176 MB/s)        382 (154 MB/s)                      48   14.37%
    misc::medium_32                     289 (207 MB/s)        339 (176 MB/s)                      50   17.30%
    misc::no_exponential                520 (192 MB/s)        576 (173 MB/s)                      56   10.77%
    misc::not_literal                   297 (171 MB/s)        350 (145 MB/s)                      53   17.85%
    misc::one_pass_long_prefix          218 (119 MB/s)        268 (97 MB/s)                       50   22.94%
    misc::one_pass_long_prefix_not      189 (137 MB/s)        234 (111 MB/s)                      45   23.81%
    misc::one_pass_short                147 (115 MB/s)        191 (89 MB/s)                       44   29.93%
    misc::one_pass_short_not            146 (116 MB/s)        190 (89 MB/s)                       44   30.14%
    sherlock::everything_greedy         8,261,531 (72 MB/s)   10,548,218 (56 MB/s)         2,286,687   27.68%
    sherlock::everything_greedy_nl      6,761,252 (87 MB/s)   5,485,594 (108 MB/s)        -1,275,658  -18.87%
    sherlock::ing_suffix                3,210,605 (185 MB/s)  3,548,306 (167 MB/s)           337,701   10.52%
    sherlock::ing_suffix_limited_space  3,117,710 (190 MB/s)  3,444,708 (172 MB/s)           326,998   10.49%
    sherlock::letters                   65,399,440 (9 MB/s)   109,569,267 (5 MB/s)        44,169,827   67.54%
    sherlock::letters_lower             60,289,303 (9 MB/s)   106,638,088 (5 MB/s)        46,348,785   76.88%
    sherlock::letters_upper             4,650,326 (127 MB/s)  6,230,733 (95 MB/s)          1,580,407   33.98%
    sherlock::name_alt4                 295,505 (2,013 MB/s)  363,242 (1,637 MB/s)            67,737   22.92%
    sherlock::name_whitespace           60,149 (9,890 MB/s)   71,406 (8,331 MB/s)             11,257   18.72%
    sherlock::the_whitespace            2,056,631 (289 MB/s)  2,651,996 (224 MB/s)           595,365   28.95%
    sherlock::words                     20,320,171 (29 MB/s)  32,089,109 (18 MB/s)        11,768,938   57.92%
    

    I admit that I found it somewhat surprising that a lock free stack was being beaten by a mutex, but this is almost certainly due to my complete ignorance of lock free algorithms. (Hence, why I'm writing this ticket.)

    Other things

    When using a TreiberStack<T>, perf top reports mem::epoch::participant::Participant::enter as a potential hotspot.

    When using a Mutex<Vec<T>>, perf top reports pthread_mutex_lock and __pthread_mutex_unlock_usercnt as potential hot spots.

    In both cases, other hotspots also of course appear, such as methods in the DFA, memchr, etc...

    Another thing I've noticed in profiling is how much time is being spent in the Drop impl for PoolGuard. I tracked this down to time spent in memcpy, so I moved the representation of scratch spaces to a Box, which definitely helped, especially for the DFA whose cache struct isn't tiny. I'm not sure what can be done about this though.

    ???

    So what can we do here? Is a TreiberStack in crossbeam the best lock free algorithm we can use? Does crossbeam have overhead that we could eliminate with a custom implementation? Other ideas?

    Scope

    In my view, the overhead of synchronization is holding us back from being more competitive with PCRE on very short matches/haystacks.

    enhancement help wanted question 
    opened by BurntSushi 19
  • Does not support (?!...) negative lookahead assertion?


    Python re module supports (?!...) syntax, see https://docs.python.org/2/library/re.html#regular-expression-syntax

    The code below compiles but panics at runtime:

    extern crate regex;
    
    fn main() {
        let re = regex::Regex::new(r"Isaac (?!Asimov)").unwrap();
        println!("{}", re.is_match("Isaac "));
        println!("{}", re.is_match("Isaac Asimov"));
    }
    
    thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value:
    Syntax(Error { pos: 8, surround: "ac (?!Asim", kind: UnrecognizedFlag('!') })', 
    ../src/libcore/result.rs:738
    

    The pattern works fine in Python:

    import re
    
    pt = re.compile(r"Isaac (?!Asimov)")
    pt.match("Isaac ")
    pt.match("Isaac Asimov")
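    Since this crate has no lookaround, one possible workaround (not from the original issue; `matches` is an illustrative helper) is to run a second anchored regex at the end of each candidate match:

    ```rust
    use regex::Regex;

    fn main() {
        // Emulate r"Isaac (?!Asimov)": accept a match of "Isaac " only
        // when the text immediately after it does not start with "Asimov".
        let prefix = Regex::new(r"Isaac ").unwrap();
        let negated = Regex::new(r"^Asimov").unwrap();
        let matches = |text: &str| {
            prefix
                .find_iter(text)
                .any(|m| !negated.is_match(&text[m.end()..]))
        };
        assert!(matches("Isaac "));
        assert!(!matches("Isaac Asimov"));
    }
    ```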
    
    opened by messense 18
  • Performance issue of defined expressions


    I'm working on a comparison of different regex engines including your crate. I discovered some expressions with a slow execution time compared to other engines. I used the test tool from rust-leipzig/regex-performance to measure the performance on a given input file.

    The following chart tries to give an overview of the results: rust_regex

    The affected expressions are:

    1. [a-q][^u-z]{13}x
    2. ∞|✓
    3. (?i)Twain

    Are there any known issues or limitations regarding the given expressions?

    Additionally I attached my results: results.zip

    opened by BernhardtD 17
  • API to get size of compiled regex


    As an extension of the RegexBuilder::size_limit function, it might be useful to know the actual size taken by a compiled regular expression. A possible use for this is for when you want to process a number of untrusted regular expressions, and be assured that collectively they don't exceed a limit.

    I propose that the size of the compiled program (the same one used when checking against the size_limit of the builder) be stored inside a regular expression struct, and be accessible through a new function Regex::approximate_size.

    opened by bentheiii 4
  • sharing one regex across many threads can lead to big slowdowns due to mutex contention

    sharing one regex across many threads can lead to big slowdowns due to mutex contention

    To reproduce, create a Cargo.toml with this:

    [package]
    name = "regex-contention-repro-work"
    version = "0.1.0"
    edition = "2021"
    
    [[bin]]
    name = "repro"
    path = "main.rs"
    
    [dependencies]
    anyhow = "1.0.66"
    regex = "1.7.0"
    

    And in the same directory, create a main.rs containing:

    use std::{
        io::Write,
        sync::Arc,
        time::{Duration, Instant},
    };
    
    use regex::{Regex, RegexBuilder};
    
    const ITERS: usize = 100_000;
    const PATTERN: &str = "";
    const HAYSTACK: &str = "ZQZQZQZQ";
    
    #[derive(Debug)]
    struct Benchmark {
        re: Regex,
        threads: u32,
    }
    
    impl Benchmark {
        fn cloned(&self) -> anyhow::Result<Duration> {
            let start = Instant::now();
            let mut handles = vec![];
            for _ in 0..self.threads {
                // When we clone the regex like this, it does NOT make a complete
                // copy of all of its internal state, but it does create an entirely
                // fresh pool from which to get mutable scratch space for each
                // search. Basically, a 'Regex' internally looks like this:
                //
                //   struct Regex {
                //     // Among other things, this contains the literal
                //     // prefilters and the Thompson VM bytecode
                //     // instructions.
                //     read_only: Arc<ReadOnly>,
                //     // Contains space used by the regex matcher
                //     // during search time. e.g., The DFA transition
                //     // table for the lazy DFA or the set of active
                //     // threads for the Thompson NFA simulation.
                //     pool: Pool<ScratchSpace>,
                //   }
                //
                // That is, a regex already internally uses reference counting,
                // so cloning it does not create an entirely separate copy of the
                // data. It's effectively free. However, cloning it does create
                // an entirely fresh 'Pool'. It specifically does not reuse pools
                // across cloned regexes, and it does this specifically so that
                // callers have a path that permits them to opt out of contention
                // on the pool.
                //
                // Namely, when a fresh pool is created, it activates a special
                // optimization for the first thread that accesses the pool. That
                // thread gets access to a special value ONLY accessible to it,
                // whereas all other threads accessing the pool get
                // sent through the "slow" path via a mutex. When a lot of threads
                // share the same regex **with the same pool**, this mutex comes
                // under very heavy contention.
                //
                // It is worth pointing out that the mutex is NOT held for the
                // duration of the search. Effectively what happens is:
                //
                //   is "first" thread optimization active?
                //   NO: mutex lock
                //       pop pointer out of the pool
                //       mutex unlock
                //   do a search
                //   is "first" thread optimization active?
                //   NO: mutex lock
                //       push pointer back into pool
                //       mutex unlock
                //
                // So in the case where "do a search" is extremely fast, i.e., when
                // the haystack is tiny, as in this case, the mutex contention ends
                // up dominating the runtime. As the number of threads increases,
                // the contention gets worse and worse and thus runtime blows up.
                //
                // But, all of that contention can be avoided by giving each thread
                // a fresh regex and thus each one gets its own pool and each
                // thread gets the "first" thread optimization applied. So the
                // internal access for the mutable scratch space now looks like
                // this:
                //
                //   is "first" thread optimization active?
                //   YES: return pointer to special mutable scratch space
                //   do a search
                //   is "first" thread optimization active?
                //   YES: do nothing
                //
                // So how to fix this? Well, it's kind of hard. The regex crate
                // used to use the 'thread_local' crate that optimized this
                // particular access pattern and essentially kept a hash table
                // keyed on thread ID. But this led to other issues. Specifically,
                // its memory usage scaled with the number of active threads using
                // a regex, whereas the current approach scales with the number of
                // active threads *simultaneously* using a regex.
                //
                // I am not an expert on concurrent data structures though, so
                // there is likely a better approach. But the idea here is indeed
                // to make it possible to opt out of contention by being able to
                // clone the regex. Once you do that, there are **zero** competing
                // resources between the threads.
                //
                // Why not just do this in all cases? Well, I guess I would if I
                // could, but I don't know how. The reason why explicit cloning
                // permits one to opt out is that each thread is handed its own
                // copy of the regex and its own pool, and that is specifically
                // controlled by the caller. I'm not sure how to do that from
                // within the regex library itself, since it isn't really aware of
                // threads per se.
                let re = self.re.clone();
                handles.push(std::thread::spawn(move || {
                    let mut matched = 0;
                    for _ in 0..ITERS {
                        if re.is_match(HAYSTACK) {
                            matched += 1;
                        }
                    }
                    matched
                }));
            }
            let mut matched = 0;
            for h in handles {
                matched += h.join().unwrap();
            }
            assert!(matched > 0);
            Ok(Instant::now().duration_since(start))
        }
    
        fn shared(&self) -> anyhow::Result<Duration> {
            let start = Instant::now();
            let mut handles = vec![];
            // We clone the regex into an Arc but then share it across all threads.
            // Each thread in turn competes with the single regex's shared memory
            // pool for mutable scratch space to use during a search. This is what
            // ultimately caused this 'shared' benchmark to be much slower than the
            // 'cloned' benchmark when run with many threads. Indeed, profiling it
            // reveals that most of the time is spent in the regex internal 'Pool'
            // type's 'get' and 'get_slow' methods.
            let re = Arc::new(self.re.clone());
            for _ in 0..self.threads {
                let re = Arc::clone(&re);
                handles.push(std::thread::spawn(move || {
                    let mut matched = 0;
                    for _ in 0..ITERS {
                        if re.is_match(HAYSTACK) {
                            matched += 1;
                        }
                    }
                    matched
                }));
            }
            let mut matched = 0;
            for h in handles {
                matched += h.join().unwrap();
            }
            assert!(matched > 0);
            Ok(Instant::now().duration_since(start))
        }
    }
    
    fn main() -> anyhow::Result<()> {
        let threads: u32 = std::env::var("REGEX_BENCH_THREADS")?.parse()?;
        let re = RegexBuilder::new(PATTERN)
            .unicode(false)
            .dfa_size_limit(50 * (1 << 20))
            .build()?;
        let benchmark = Benchmark { re, threads };
        let which = std::env::var("REGEX_BENCH_WHICH")?;
        let duration = match &*which {
            "cloned" => benchmark.cloned(),
            "shared" => benchmark.shared(),
            unknown => anyhow::bail!("unrecognized REGEX_BENCH_WHICH={}", unknown),
        };
        writeln!(std::io::stdout(), "{:?}", duration)?;
        Ok(())
    }
    

    Now build and run the benchmark:

    $ cargo build --release 
    $ hyperfine "REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=16 ./target/release/repro" "REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=16 ./target/release/repro"
    Benchmark 1: REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=16 ./target/release/repro
      Time (mean ± σ):       6.5 ms ±   1.2 ms    [User: 55.1 ms, System: 3.8 ms]
      Range (min … max):     0.1 ms …  10.5 ms    254 runs
     
      Warning: Command took less than 5 ms to complete. Note that the results might be inaccurate because hyperfine can not calibrate the shell startup time much more precise than this limit. You can try to use the `-N`/`--shell=none` option to disable the shell completely.
     
    Benchmark 2: REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=16 ./target/release/repro
      Time (mean ± σ):     530.5 ms ±  12.6 ms    [User: 1886.3 ms, System: 4994.7 ms]
      Range (min … max):   514.2 ms … 552.4 ms    10 runs
     
    Summary
      'REGEX_BENCH_WHICH=cloned REGEX_BENCH_THREADS=16 ./target/release/repro' ran
       81.66 ± 15.52 times faster than 'REGEX_BENCH_WHICH=shared REGEX_BENCH_THREADS=16 ./target/release/repro'
    

    As noted in the comments in the code above, the only difference between these two benchmarks is that cloned creates a fresh Regex for each thread, whereas shared uses the same Regex across multiple threads. This leads to contention on a single Mutex in the latter case and overall very poor performance. (See the code comments above for more info.)

    We can confirm this by looking at a profile. Here is a screenshot from perf for the cloned benchmark:

    [perf profile: cloned]

    And now for the shared benchmark:

    [perf profile: shared]

    As we can see, in the shared benchmark, virtually all of the time is being spent locking and unlocking the mutex.

    bug 
    opened by BurntSushi 5
  • Unexpected match with end anchor in group

    Unexpected match with end anchor in group

    What version of regex are you using?

    1.6.0

    Describe the bug at a high level.

    The regex (a$)b$ matches the input ab, which is not expected.

    What are the steps to reproduce the behavior?

    fn main() {
        let regex = regex::Regex::new("(a$)b$").unwrap();
        println!("match: {:?}", regex.find("ab"));
        println!("captures: {:?}", regex.captures("ab"));
    }
    

    What is the actual behavior?

    This prints:

    match: Some(Match { text: "ab", start: 0, end: 2 })
    captures: None
    

    What is the expected behavior?

    The regex should not match. The capture call gives the expected result, but not the find call.

    a$b$ and (a$)b display the expected behavior, so this seems to be a weird corner case.

    My first hypothesis was that it is caused by an invalid suffix extraction:

    $ regex-debug prefixes '(a$)b$'
    Cut(a)
    $ regex-debug suffixes '(a$)b$'
    Complete(ab)
    $ regex-debug prefixes 'a$b$'
    Cut(a)
    $ regex-debug suffixes 'a$b$'
    Cut(b)
    

    But the regex (a$)b does not have the bug, and yet has the same suffix extraction:

    $ regex-debug suffixes '(a$)b'
    Complete(ab)
    

    So I'm not sure it is related to this.

    bug fix-incoming 
    opened by vthib 1
  • Compile-time regex for smaller WASM binary size

    Compile-time regex for smaller WASM binary size

    I would like to once again raise the question of compile-time regular expressions. They were once provided by the regex-macros crate, but are no longer supported.

    If I understand correctly, compile-time regular expressions were deemed unnecessary because their primary use case was compile-time pattern verification, which can now be achieved by other means, like the invalid_regex Clippy lint or the lazy-regex crate.

    But there is another reason one might want to avoid including a regex compiler into the application, and that is binary size. This becomes especially important with WASM targets. The increase in WASM binary size from the regex crate is very noticeable. In my experiments it varies from ~200 KiB to ~600 KiB depending on the features. I made a small demo, check it out to see for yourself: https://github.com/amatveiakin/regex-wasm-demo

    Even 200 KiB is a lot for a web app and is best avoided. And I don't think there is a better way to do this than building the regex at compile time. What do you think?

    question 
    opened by amatveiakin 8
  • Added `participating_captures_len`

    Added `participating_captures_len`

    Long overdue, but this is the first step towards adding https://github.com/rust-lang/regex/issues/824, as greenlit by @BurntSushi.

    This new method on Regex returns the number of capture groups that will be filled during a successful match, or None if this can't be determined statically at regex compile time.

    opened by orlp 5
Releases(1.0.0)
  • 1.0.0(May 1, 2018)

    This release marks the 1.0 release of regex.

    While this release includes some breaking changes, most users of older versions of the regex library should be able to migrate to 1.0 by simply bumping the version number. The important changes are as follows:

    • We adopt Rust 1.20 as the new minimum supported version of Rust for regex. We also tentatively adopt a policy that permits bumping the minimum supported version of Rust in minor version releases of regex, but not in patch releases. That is, with respect to semver, we do not strictly consider bumping the minimum version of Rust to be a breaking change, but adopt a conservative stance as a compromise.
    • Octal syntax in regular expressions has been disabled by default. This permits better error messages that inform users that backreferences aren't available. Octal syntax can be re-enabled via the corresponding option on RegexBuilder.
    • (?-u:\B) is no longer allowed in Unicode regexes since it can match at invalid UTF-8 code unit boundaries. (?-u:\b) is still allowed in Unicode regexes.
    • The From<regex_syntax::Error> impl has been removed. This formally removes the public dependency on regex-syntax.
    • A new feature, use_std, has been added and enabled by default. Disabling the feature will result in a compilation error. In the future, this may permit us to support no_std environments (w/ alloc) in a backwards compatible way.

    For more information and discussion, please see 1.0 release tracking issue.

  • 0.2.7(Mar 8, 2018)

    This release includes a ground-up rewrite of the regex-syntax crate, which has been in development for over a year.

    New features:

    • Error messages for invalid regexes have been greatly improved. You get these automatically; you don't need to do anything. In addition to better formatting, error messages will now explicitly call out the use of look around. When regex 1.0 is released, this will happen for backreferences as well.
    • Full support for intersection, difference and symmetric difference of character classes. These can be used via the &&, -- and ~~ binary operators within classes.
    • A Unicode Level 1 conformant implementation of \p{..} character classes. Things like \p{scx:Hira}, \p{age:3.2} or \p{Changes_When_Casefolded} now work. All property name and value aliases are supported, and properties are selected via loose matching. e.g., \p{Greek} is the same as \p{G r E e K}.
    • A new UNICODE.md document has been added to this repository that exhaustively documents support for UTS#18.
    • Empty sub-expressions are now permitted in most places. That is, ()+ is now a valid regex.
    • Almost everything in regex-syntax now uses constant stack space, even when performing analysis that requires structural induction. This reduces the risk of a user provided regular expression causing a stack overflow.
    • FEATURE #174: The Ast type in regex-syntax now contains span information.
    • FEATURE #424: Support \u, \u{...}, \U and \U{...} syntax for specifying code points in a regular expression.
    • FEATURE #449: Add a Replace::by_ref adapter for use of a replacer without consuming it.

    Bug fixes:

    • BUG #446: We re-enable the Boyer-Moore literal matcher.
  • 0.2.2(May 21, 2017)

    New features:

    • FEATURE #341: Support nested character classes and the intersection operation. For example, [\p{Greek}&&\pL] matches Greek letters and [[0-9]&&[^4]] matches every decimal digit except 4. (Much thanks to @robinst, who contributed this awesome feature.)

    Bug fixes:

    • BUG #321: Fix bug in literal extraction and UTF-8 decoding.
    • BUG #326: Add documentation tip about the (?x) flag.
    • BUG #333: Show additional replacement example using curly braces.
    • BUG #334: Fix bug when resolving captures after a match.
    • BUG #338: Add example that uses Captures::get to API documentation.
    • BUG #353: Fix RegexSet bug that caused match failure in some cases.
    • BUG #354: Fix panic in parser when (?x) is used.
    • BUG #358: Fix literal optimization bug with RegexSet.
    • BUG #359: Fix example code in README.
    • BUG #365: Fix bug in rure_captures_len in the C binding.
    • BUG #367: Fix byte class bug that caused a panic.
Owner
The Rust Programming Language