A friendly parser combinator crate



A friendly parser combinator crate that makes writing LL(1) parsers with error recovery easy.


Here follows a Brainfuck parser. See examples/ for the full interpreter.

fn parser() -> impl Parser<char, Vec<Instr>> {
    use Instr::*;
    recursive(|bf| bf.delimited_by('[', ']').map(|xs| xs.map_or(Invalid, Loop))
        .or(just('<').to(Left)).or(just('>').to(Right))
        .or(just('+').to(Incr)).or(just('-').to(Decr))
        .or(just(',').to(Read)).or(just('.').to(Write))
        .repeated())
}

Features
  • Generic combinator parsing
  • Error recovery
  • Recursive parsers
  • Text-specific parsers & utilities
  • Custom error types

Planned Features

  • Nested parsers

Other Information

My apologies to Noam for choosing such an absurd name.


Chumsky is licensed under the MIT license (see LICENSE in the main repository).

  • [Question] How to do `take_until` with `end`

    Hello! I'm trying to use take_until to collect some bytes until the parser inside take_until catches a pattern or it reaches the end.

    Here is an example of what I mean:

        .map(|(chars, _): (Vec<char>, _)| Token::Chars(chars))

    Now it could happen that I have a string like this one:


    which doesn't have a = so it wouldn't collect those a's. I've tried to do take_until(just("=").or(end())) but since the just("=") and end() parsers don't return the same thing (and end() just returns ()), I'm not able to do this.

    Do you have an idea how I can collect those a's until the parser reaches a = or the end?
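    One way to reconcile the output types, sketched against the chumsky 0.7+ API: map both alternatives to () with .ignored(), so just('=') and end() become compatible arguments to or inside take_until.

    ```rust
    use chumsky::prelude::*;

    // A sketch, assuming chumsky 0.7+: `.ignored()` discards the output of
    // `just('=')` so that both branches yield `()`, making them compatible
    // alternatives inside `take_until`.
    fn chars_until_eq() -> impl Parser<char, Vec<char>, Error = Simple<char>> {
        take_until(just('=').ignored().or(end())).map(|(chars, _)| chars)
    }
    ```

    With this, input with or without a trailing = yields the collected characters either way.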

    opened by TornaxO7 17
  • `repeated` lets keywords recognize as idents instead (priorization of `or` breaks?)

    Ok, I'm not sure what is going wrong here, and/or if this is a bug. I'm trying to port https://github.com/kritzcreek/fby19 into Rust to learn type inference.

    This is where the issue appears: https://github.com/Philipp-M/simple-hindley-milner-typeinference/blob/0c49d2c289efb498d23eacd81acae543a7aa4a97/src/parser.rs#L40

    Apparently the repeated rule (Expr::App) lets the keyword let parse into an Expr::Var instead of applying the let rule (it fails?). Output of the relevant test:

    ---- parser::parses_let_in stdout ----
    thread 'parser::parses_let_in' panicked at 'assertion failed: `(left == right)`
      left: `Ok(App(Var("let"), Var("hello")))`,
     right: `Ok(Let("hello", Lit(Bool(true)), Lit(Int(123))))`', src/parser.rs:70:5

    If I return atom directly (uncomment line 37), let_in parses correctly.

    As far as I understand, the ors of the atom rule are tried one after the other, so let_in should be prioritized over Expr::Var?

    (Can be tested with cargo test)
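    Not part of the original report, but a minimal sketch of the usual fix, assuming chumsky 0.7+'s text::keyword, which matches a whole identifier and only succeeds if it equals the keyword, so or ordering alone no longer decides:

    ```rust
    use chumsky::prelude::*;

    // Sketch: `text::ident()` happily matches "let" (and "letter"), so putting
    // the keyword branch first in an `or` chain is not enough on its own.
    // `text::keyword` refuses identifiers that merely start with the keyword.
    fn atom() -> impl Parser<char, &'static str, Error = Simple<char>> {
        text::keyword("let").to("keyword")
            .or(text::ident().to("ident"))
            .then_ignore(end())
    }
    ```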

    opened by Philipp-M 10
  • Can't specify a single value with `just`

    I must be missing something obvious. As a first step, I am writing a parser for just one single u64:

    #[derive(Clone, Debug, PartialEq)]
    enum Expr {
    fn expr_parser() -> impl Parser<char, Vec<Expr>, Error = Simple<Token>> {
        let number = just('-').or_not()
            .map(|s| s.parse::<u64>().unwrap())
        just(number.map(Expr::Value2))  // <--- error here

    The error:

    error[E0277]: can't compare `chumsky::combinator::Map<Label<chumsky::combinator::Map<chumsky::combinator::Map<chumsky::combinator::Map<Then<OrNot<Just<char, _>>, impl chumsky::Parser<char, <char as Character>::Collection>+Copy+Clone>, fn((Option<char>, String)) -> Vec<_>, (Option<char>, String)>, fn(Vec<_>) -> String, Vec<_>>, [closure@src\main.rs:78:14: 78:43], String>, &str>, fn(u64) -> Expr {Expr::Value2}, u64>` with `chumsky::combinator::Map<Label<chumsky::combinator::Map<chumsky::combinator::Map<chumsky::combinator::Map<Then<OrNot<Just<char, _>>, impl chumsky::Parser<char, <char as Character>::Collection>+Copy+Clone>, fn((Option<char>, String)) -> Vec<_>, (Option<char>, String)>, fn(Vec<_>) -> String, Vec<_>>, [closure@src\main.rs:78:14: 78:43], String>, &str>, fn(u64) -> Expr {Expr::Value2}, u64>`
      --> src\main.rs:81:10
    81 |     just(number.map(Expr::Value2))
       |          ^^^^^^^^^^^^^^^^^^^^^^^^ no implementation for `chumsky::combinator::Map<Label<chumsky::combinator::Map<chumsky::combinator::Map<chumsky::combinator::Map<Then<OrNot<Just<char, _>>, impl chumsky::Parser<char, <char as Character>::Collection>+Copy+Clone>, fn((Option<char>, String)) -> Vec<_>, (Option<char>, String)>, fn(Vec<_>) -> String, Vec<_>>, [closure@src\main.rs:78:14: 78:43], String>, &str>, fn(u64) -> Expr {Expr::Value2}, u64> == chumsky::combinator::Map<Label<chumsky::combinator::Map<chumsky::combinator::Map<chumsky::combinator::Map<Then<OrNot<Just<char, _>>, impl chumsky::Parser<char, <char as Character>::Collection>+Copy+Clone>, fn((Option<char>, String)) -> Vec<_>, (Option<char>, String)>, fn(Vec<_>) -> String, Vec<_>>, [closure@src\main.rs:78:14: 78:43], String>, &str>, fn(u64) -> Expr {Expr::Value2}, u64>`
      ::: C:\Users\cedri\.cargo\registry\src\github.com-1ecc6299db9ec823\chumsky-0.5.0\src\primitive.rs:99:24
    99 | pub fn just<I: Clone + PartialEq, E>(x: I) -> Just<I, E> {
       |                        --------- required by this bound in `chumsky::primitive::just`
       = help: the trait `PartialEq` is not implemented for `chumsky::combinator::Map<Label<chumsky::combinator::Map<chumsky::combinator::Map<chumsky::combinator::Map<Then<OrNot<Just<char, _>>, impl chumsky::Parser<char, <char as Character>::Collection>+Copy+Clone>, fn((Option<char>, String)) -> Vec<_>, (Option<char>, String)>, fn(Vec<_>) -> String, Vec<_>>, [closure@src\main.rs:78:14: 78:43], String>, &str>, fn(u64) -> Expr {Expr::Value2}, u64>`

    The map function that gets called is inside chumsky and it returns a Map<...>, so it makes sense that this value cannot be compared.

    What am I missing to parse a single number?
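    A sketch of the likely resolution (the Expr variant here is assumed, and text::int is chumsky 0.7's digit parser): just expects a token value implementing PartialEq, not another parser, so the number parser should be used directly rather than wrapped in just:

    ```rust
    use chumsky::prelude::*;

    // Hypothetical single-variant version of the `Expr` from the report.
    #[derive(Clone, Debug, PartialEq)]
    enum Expr {
        Value2(u64),
    }

    // Sketch: `just` compares a *token* against the input; a parser for a
    // number is simply used on its own, no `just` wrapper required.
    fn expr_parser() -> impl Parser<char, Expr, Error = Simple<char>> {
        text::int(10).map(|s: String| Expr::Value2(s.parse().unwrap()))
    }
    ```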

    opened by cbeust 9
  • Add 'recover_via' parser

    Adds a new parser (currently named recover_via - I'm open to suggestions) which attempts the first parser and returns its result on success. On error it will try the given recovery parser and if that succeeds it will return that result but with the errors from the first parser. In the case both parsers fail, the second is considered to be a failed recovery strategy and only the errors from the first are returned. This allows you to chain multiple successive recover_via calls.

    This parser is similar to Parser::or except that it returns the errors of the first parser even if the second succeeds.

    This parser is also similar to recover_with(Strategy) though it accepts any parser as an argument rather than only an existing strategy, and is thus more general and easier to extend for users (at least with the current design of Strategy).

    opened by jfecher 7
  • Implement support for `no_std`

    I added a std feature to support building with no_std.

    If enabled, it implements the Error trait and uses std's HashSet. Otherwise, it doesn't implement Error and uses hashbrown's HashSet.

    However, there are a few details I'd like to hear your opinion on if I need to make some changes.

    • no_std is disabled in documentation.
    • Because of rust-lang/cargo#1839, the dependency on hashbrown can't be optional. It could probably be if there was a no_std feature instead, but according to the feature reference in the Cargo book, features should be additive, and a std feature should be used over a no_std one.
    • I looked into hashbrown and found that it uses ahash by default (through a default feature), so I removed the direct dependency on it and used it through hashbrown. This may be undesirable.
    • The Verbose struct in the debug module currently uses the print! macros. Since those do not exist in a no_std environment, the function that uses them was changed to be a no-op when std is disabled.

    (Closes #38)

    opened by wackbyte 7
  • Rewind parser

    Hi! I continue to thoroughly enjoy parsing with this amazing crate. Today I'd like to introduce a lookahead combinator. I don't believe that this necessarily should be merged, but it is a little more convenient in some cases. Also, I am not confident in the naming and design of this new combinator.


    I want to parse something like this:

    a b c - d



    But this fails at -.

    So I introduced lookahead and changed the parser like below.


    Comparison with other designs

    I chose the one I think best fits chumsky's APIs.


    This feels very natural, but it always ignores the output of the lookahead parser. In some cases we may not want to ignore it, such as an infix attribute syntax that influences both sides.


    This is also natural, and allows us to choose whether or not to ignore the output. However, it sits oddly with chumsky's conventions.
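    For context, this combinator eventually landed as rewind in chumsky 0.7 (listed in the release notes below). A minimal sketch, assuming that API:

    ```rust
    use chumsky::prelude::*;

    // Sketch: `rewind` resets the input position after the inner parser
    // succeeds, so the `-` is required but left unconsumed for whatever
    // parses next.
    fn ident_before_dash() -> impl Parser<char, String, Error = Simple<char>> {
        text::ident().then_ignore(just('-').rewind())
    }
    ```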

    opened by ryo33 7
  • Zero copy cleanup

    Since I am super excited for the up-and-coming zero-copy implementation, I decided I would try and contribute a little back in the form of some small cleanup commits.

    Most of these were just some bits that I found while perusing through the code, and thought they could be done a tad simpler. Let me know if I missed something, or if something needs changing!

    Thanks for your time!

    opened by Zij-IT 6
  • Update the pythonic example to demonstrate how to create a flatten token stream from a token tree

    This PR adds a step to the pythonic example to convert a token tree from the lexer into a flat token stream using Stream::from_nested.

    This would not be a long-term solution because it's quite verbose, but it will work fine for now.

    Relates to #20.

    opened by tatsuya6502 6
  • How to preserve spans from the lexer into the parser?

    I'm struggling to access char spans in my parser when using a two-step lexer+parser approach:

    My lexer looks like:

    Parser<char, Vec<(Token, Span)>, Error = Simple<char>>

    and I started working on a parser that looks like:

    Parser<(Token, Span), WithSpan<Expr>, Error = Simple<(Token, Span)>>

    In nano_rust.rs, the parser only accepts Token instead of (Token, Span).

    I'm going to try doing the same and then using the token spans provided by .map_with_span() in the parser to look up the span in the original char stream from the Vec<(Token, Span)> produced by the lexer.

    I was wondering if this is the recommended way of accessing char spans from the parser when using a two-step parsing approach?
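    For reference, a compact sketch of the nano_rust-style approach with a toy token type (all names here are assumptions): the lexer's (Token, Span) pairs are fed to the parser through a Stream, so the parser's input type stays Token while map_with_span still recovers the original char spans.

    ```rust
    use chumsky::prelude::*;
    use chumsky::Stream;

    type Span = std::ops::Range<usize>;

    // A toy token type standing in for the real lexer output.
    #[derive(Clone, Debug, PartialEq, Eq, Hash)]
    enum Token {
        Num(u64),
        Plus,
    }

    // Sketch: the parser consumes plain `Token`s, but because the `Stream`
    // carries the lexer's spans, `map_with_span` yields each token's char span.
    fn expr() -> impl Parser<Token, (u64, Span), Error = Simple<Token>> {
        filter_map(|span, tok| match tok {
            Token::Num(n) => Ok(n),
            other => Err(Simple::expected_input_found(span, None, Some(other))),
        })
        .map_with_span(|n, span| (n, span))
    }
    ```

    The stream is built with Stream::from_iter(eoi_span, tokens.into_iter()), where eoi_span is an empty span just past the end of the source.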

    opened by kevinbarabash 6
  • Make clippy happy

    Ran clippy on the codebase and fixed all but one or two warnings. I did each fix as its own commit, so if you only want some I'll scrap the rest, or if you don't want any I can close this. If you want a minimal set, I'd at least bench redundant_clone and needless_range_loop, as they might have an optimization impact. And deprecated_semver looks like an actual fix, if a relatively insignificant one.

    opened by CraftSpider 6
  • `separated_by` does not work as expected


    Thanks for this wonderful library! I've been using chumsky 0.5.0 on a project and it works great! However, I am baffled by a weird behavior of separated_by, which I think may be a potential bug.

    Here's a simplified piece of code that reproduces the problem I met:

    use chumsky::prelude::*;
    fn main() {
      let digit_list =
        one_of::<char, _, Simple<_>>("123".chars()).separated_by(just(','));
      let letter_list = one_of("abc".chars()).separated_by(just(','));
      let parser = digit_list.clone().or(letter_list);
      // works as expected
      assert_eq!(digit_list.parse("1,2,3"), Ok(vec!['1', '2', '3']));
      // works as expected: trailing tokens are ignored
      assert_eq!(digit_list.parse("1,2,3X"), Ok(vec!['1', '2', '3']));
      // does not work as expected: the trailing "," is not ignored, where I expect it to be ignored as other trailing tokens
      // expected: Ok(['1', '2', '3'])
      // actual: Err([Simple { span: 6..7, reason: Unexpected, expected: {'2', '1', '3'}, found: None, label: None }])
      assert_eq!(digit_list.parse("1,2,3,"), Ok(vec!['1', '2', '3']));
      // does not work as expected. This result is even weirder. In any case I don't expect an empty list to be returned.
      // expected: Ok(['1', '2', '3'])
      // actual: Ok([])
      assert_eq!(parser.parse("1,2,3,"), Ok(vec!['1', '2', '3']))
    }

    Basically, the trailing separator is not treated as a normal trailing token and triggers an error when encountered.

    P.S. I know about the allow_trailing option. This example is simplified and doesn't fully represent my use-case, where I cannot allow the trailing separators because I need to leave them for another part of the program to handle.
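    For readers who can accept trailing separators, the opt-in mentioned above looks like this (a sketch assuming chumsky 0.7+, where one_of accepts a string directly):

    ```rust
    use chumsky::prelude::*;

    // Sketch: `allow_trailing` lets `separated_by` consume one optional
    // trailing separator instead of committing to parse another element
    // after it.
    fn digit_list() -> impl Parser<char, Vec<char>, Error = Simple<char>> {
        one_of("123").separated_by(just(',')).allow_trailing()
    }
    ```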

    opened by shouya 6
  • About `map_slice` in zero-copy

    Extracted from https://github.com/zesterer/chumsky/pull/240#issuecomment-1369193646

    I'd be interested in an explanation of what map_slice does, and how/when to actually use it. I saw one example using it as ...map_slice(|x| x) but it's not clear to me what it is for.

    Reply from @zesterer:

    map_slice is a zero-copy thing, it gives you the slice of the input that the parser covered. For example:

    let digits = one_of("0123456789").repeated().map_slice(|x| x);
    assert_eq!(digits.parse("743").unwrap(), "743");

    Because the parser above is just pulling a slice from the input, no allocation needs to occur! On master, this would require collecting each individual digit into a new String.

    Continuing this question/discussion here to avoid 'polluting' that PR

    With this info and after digging into the sources, I think I fully understand it now, and I have to say: IT'S BRILLIANT!

    The way you implemented the zero-copy system, with different modes, using check/emit parsing as needed for the different parsers, is JUST PERFECT :heart: :heart: map_slice (currently undocumented) seems to be one of the core combinators of the zero-copy system; I think it'll be important to emphasize its importance (and how it works)!

    Idea: make it the default for sliceable repeated combinators (and others)?

    Thinking a bit more, have you considered making it more 'visible' (increasing the chance people will use it 'by default') by having it integrated into the repeated combinator (and others that output 'lists' of X)? The only consideration I see is that repeated can be used on inputs that are not SliceInput.

    :point_right: Would it be possible (/a good idea?) to make a dedicated repeated implementation that automatically uses map_slice(|x| x) when the input is a SliceInput?

    Question: Handling of ignore_then?

    I can't manage to test it locally (I can't find how to type the parser correctly...), so this is hypothesis only.

    I'm wondering about the handling of something like this:

    let hex_digits = one_of("0123456789ABCDEF").repeated().at_least(1);
    let verbose_hex = just("0x").ignore_then(hex_digits);
    // later
    let only_hex_digits = verbose_hex.map_slice(|x| x);

    According to the implementation: https://github.com/zesterer/chumsky/blob/1d5f110ec5db3b91dd4a57fb84a0de85895a8d98/src/zero_copy/combinator.rs#L33-L39 the cursor is saved before consuming any input from the verbose_hex parser, so before parsing 0x, leading to a slice that includes 0x, right? (and the same issue with then_ignore at the end of the pattern, or a whole delimited_by, ..)

    If so, it kinda looks like a bug. Do you think there's another way to implement map_slice so it works properly? (without putting the map_slice in verbose_hex)

    opened by bew 1
  • add the ability to have subslice parser combinators

    I want to be able to implement parser combinators where the second parser parses only the input slice the previous parser has parsed. E.g. an AndIs implementation where the second parser is limited to the input the first parser has parsed.

    As I'm making changes that feel closer to the core than my previous ones, I wanted to ask if I'm on the right track implementation-wise and whether such functionality would be welcome.

    I've added a (badly named) trait Slice that is a SliceInput with the additional constraint Self = SliceInput::Slice. The use case is for parser combinators like the one described above, so that both parsers can have the same error type. Then there is the with_subslice function that limits the input in InputRef to the passed range for the passed closure.

    opened by herkhinah 1
  • zero_copy::combinator::AndIs behavior

    From the wording AndIs I would expect the second parser to only get the section the first parser parsed as input. This would allow the second parser to make use of end() to check if it parses the same amount of input as the first parser.

    The behavior of the current implementation seems to be that the second parser can parse more input than the first parser, while at the same time, after the second parser finishes, the input is rewound to the position of the first parser.

    opened by herkhinah 3
  • How to do error recovery that *adds* tokens

    Hey, I'm working on my own language using this parser, and all the error recovery options appear to be for skipping/removing/consuming tokens. I'm not sure if what I'm thinking is the proper way to go about it, but if I have a language construct like function arguments my_function(1, 2, 3) and I miss a comma, as in my_function(1, 2 3), I think the proper move would be an error recovery strategy that adds a , comma token and sees if that fixes the input. I don't know, though. Thoughts?
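    There is no built-in 'insert a token' strategy, but for bracketed constructs the nearest existing tool is nested_delimiters recovery, which swallows the malformed delimited region and substitutes a fallback value. A sketch, assuming the chumsky 0.8 API (the Expr type here is hypothetical):

    ```rust
    use chumsky::prelude::*;

    // Hypothetical expression type with an explicit error node.
    #[derive(Clone, Debug, PartialEq)]
    enum Expr {
        Num(u64),
        Call(Vec<Expr>),
        Error,
    }

    // Sketch: on a malformed argument list such as `(1,2 3)`, recovery
    // consumes the balanced `(...)` region and yields `Expr::Error` in its
    // place, letting parsing continue past the call.
    fn call_args() -> impl Parser<char, Expr, Error = Simple<char>> {
        text::int(10)
            .map(|s: String| Expr::Num(s.parse().unwrap()))
            .separated_by(just(','))
            .delimited_by(just('('), just(')'))
            .map(Expr::Call)
            .recover_with(nested_delimiters('(', ')', [], |_| Expr::Error))
    }
    ```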

    opened by MalekiRe 1
  • Syntax suggestion for select!

    I've seen you updated select to optionally accept a span in master. However, with the way it is implemented right now, span must be defined on every enum path. I've used the following implementation quite successfully:

    macro_rules! select {
        (|$span:ident| $($p:pat $(if $guard:expr)? => $out:expr),+ $(,)?) => ({
            chumsky::primitive::filter_map(move |$span, x| match x {
                $($p $(if $guard)? => ::core::result::Result::Ok($out)),+,
                _ => ::core::result::Result::Err(chumsky::error::Error::expected_input_found($span, ::core::option::Option::None, ::core::option::Option::Some(x))),
            })
        });
        ($($p:pat $(if $guard:expr)? => $out:expr),+ $(,)?) => (select!(|_span| $($p $(if $guard)? => $out),+));
    }

    This way, span only needs to be specified once: select!(|span| Token::Literal(s) => (span, Value::Literal(s)))

    I think this is also easier to read, but that's probably a matter of opinion. If you want, I can create a PR.

    opened by muja 1
  • TODO in nano_rust

    In nano_rust.rs there is this todo:

        // Expressions, chained by semicolons, are statements
        .foldl(|a, b| {
            let span = a.1.clone(); // TODO: Not correct
            Box::new(match b {
                Some(b) => b,
                None => (Expr::Value(Value::Null), span.clone()),

    I was just curious why it isn't correct. Is it that the span is just from the first expression? Would the correct fix be this?

        let a_start = a.1.start;
        let b_end = b.as_ref().map(|b| b.1.end).unwrap_or(a.1.end);
        Box::new(match b {
            Some(b) => b,
            None => (Expr::Value(Value::Null), b_end..b_end),

    In other words, the span for the whole expression is from the start of a to the end of b (or end of a if b is None), and then the span for b is an empty slice at the end of a if b is None.

    Is that right?
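    The arithmetic being proposed can be checked in isolation; here is a minimal sketch with plain Range<usize> standing in for the real Span type (function name assumed):

    ```rust
    use std::ops::Range;

    // Sketch: the combined span runs from the start of `a` to the end of `b`,
    // falling back to the end of `a` when `b` is `None`; the synthesised `b`
    // gets an empty span at that same end position.
    fn combine(a: Range<usize>, b: Option<Range<usize>>) -> (Range<usize>, Range<usize>) {
        let b_end = b.as_ref().map(|b| b.end).unwrap_or(a.end);
        let b_span = b.unwrap_or(b_end..b_end);
        (a.start..b_end, b_span)
    }
    ```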

    opened by Timmmm 2
  • 0.8 (Feb 7, 2022)

    [0.8.0] - 2022-02-07

    Added
    • then_with combinator to allow limited support for parsing nested patterns
    • impl From<&[T; N]> for Stream
    • SkipUntil/SkipThenRetryUntil::skip_start/consume_end for more precise control over skip-based recovery

    Changed
    • Allowed Validate to map the output type
    • Switched to zero-size End Of Input spans for default implementations of Stream
    • Made delimited_by take combinators instead of specific tokens
    • Minor optimisations
    • Documentation improvements

    Fixed
    • Compilation error with --no-default-features
    • Made default behaviour of skip_until more sensible
  • 0.7 (Dec 16, 2021)

    [0.7.0] - 2021-12-16

    Added
    • A new tutorial to help new users

    • select macro, a wrapper over filter_map that makes extracting data from specific tokens easy

    • choice parser, a better alternative to long or chains (which sometimes have poor compilation performance)

    • todo parser, that panics when used (but not when created) (akin to Rust's todo! macro, but for parsers)

    • keyword parser, that parses exact identifiers

    • from_str combinator to allow converting a pattern to a value inline, using std::str::FromStr

    • unwrapped combinator, to automatically unwrap an output value inline

    • rewind combinator, that allows reverting the input stream on success. It's most useful when requiring that a pattern is followed by some terminating pattern without the first parser greedily consuming it

    • map_err_with_span combinator, to allow fetching the span of the input that was parsed by a parser before an error was encountered

    • or_else combinator, to allow processing and potentially recovering from a parser error

    • SeparatedBy::at_most to require that a separated pattern appear at most a specific number of times

    • SeparatedBy::exactly to require that a separated pattern be repeated exactly a specific number of times

    • Repeated::exactly to require that a pattern be repeated exactly a specific number of times

    • More trait implementations for various things, making the crate more useful

    Changed
    • Made just, one_of, and none_of significantly more useful. They can now accept strings, arrays, slices, vectors, sets, or just single tokens as before
    • Added the return type of each parser to its documentation
    • More explicit documentation of parser behaviour
    • More doc examples
    • Deprecated seq (just has been generalised and can now be used to parse specific input sequences)
    • Sealed the Character trait so that future changes are not breaking
    • Sealed the Chain trait and made it more powerful
    • Moved trait constraints on Parser to where clauses for improved readability

    Fixed
    • Fixed a subtle bug that allowed separated_by to parse an extra trailing separator when it shouldn't
    • Filled a 'hole' in the Error trait's API that conflated a lack of expected tokens with expectation of end of input
    • Made recursive parsers use weak reference-counting to avoid memory leaks
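    Several of the 0.7 additions above compose nicely; a small sketch (assuming the 0.7 API) combining choice with the from_str/unwrapped pair:

    ```rust
    use chumsky::prelude::*;

    // Sketch: `choice` replaces a long `or` chain, while `from_str` +
    // `unwrapped` convert the matched digits into a `u64` inline.
    fn num() -> impl Parser<char, u64, Error = Simple<char>> {
        choice((
            just("zero").to(0u64),
            text::int(10).from_str::<u64>().unwrapped(),
        ))
    }
    ```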
Joshua Barretto
Rust parser combinator framework