Overview

rust-bison-skeleton

A set of bison skeleton files that can be used to generate a Bison grammar that is written in Rust.

Technically it's more like a Bison frontend for Rust.

Requirements

  • Rust
  • Bison 3.7.3 or higher (slightly older versions might work, but this is untested; better get the latest release)

The bison executable must be available in $PATH.
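
If you want the build to fail fast with a clear message when bison is missing, the check can live in build.rs (a minimal sketch using only the standard library; the message is ours):

// build.rs (illustrative): fail early if `bison` cannot be spawned
use std::process::Command;

fn main() {
    // `output()` returns Err(NotFound) if `bison` cannot be spawned
    let bison_available = Command::new("bison")
        .arg("--version")
        .output()
        .is_ok();
    if !bison_available {
        panic!("`bison` executable not found in $PATH");
    }
}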

Short explanation

Bison is a parser generator, and in fact it doesn't really care what your programming language is.

Under the hood it takes your .y file, parses it, extracts all derivations and then constructs a bunch of tables. This data is then passed to a template called a skeleton. Simply treat it as a JSX/ERB/Handlebars-style view template.

The skeleton is a special file written in the M4 language (which is not really a programming language, it's closer to a macro engine) that, once rendered, prints your .rs file. As simple as that.

Configuration

Just like with the built-in C/C++/Java/D skeletons, the following directives can be configured:

  • %expect N where N is the number of expected conflicts. It's best to set it to 0.
  • %define api.parser.struct {Parser} where Parser is the name of your parser struct. Optional; Parser is the default name.
  • %define api.value.type {Value} where Value is the name of the derivation result (and stack item) struct. Optional; Value is the default name.
  • %code use { } allows you to specify a block of code that will be placed at the top of the generated file. Can be a multi-line block; optional; has no default value.
  • %code parser_fields { } allows you to specify additional custom fields for your Parser struct. Can be a multi-line block; optional; has no default value.
  • %define api.parser.check_debug { /* expr */ } allows you to configure when debug information is printed; self is an instance of your parser, so if you want to turn it into a configurable field use something like this:
%code parser_fields {
    debug: bool
}
%define api.parser.check_debug { self.debug }

All other directives available in Bison can be configured too; see the official Bison docs.

Basic usage

This skeleton generates an LALR(1) parser, and so the parser has a stack. This stack is represented as a Vec<Value>, where Value is an enum that must be defined by you. The name of this enum must be set using the %define api.value.type {} directive.

Let's build a simple calculator that handles lines like 1 + (4 - 3) * 2.

First, let's define a boilerplate in src/parser.y:

%expect 0

%define api.parser.struct {Parser}
%define api.value.type {Value}

%define parse.error custom
%define parse.trace

%code use {
    // all use goes here
    use crate::Loc;
}

%code parser_fields {
    // custom parser fields
}

%token
    tPLUS   "+"
    tMINUS  "-"
    tMUL    "*"
    tDIV    "/"
    tLPAREN "("
    tRPAREN ")"
    tNUM    "number"

%left "-" "+"
%left "*" "/"

%%

// rules

%%

impl Parser {
    // parser implementation
}

enum Value {
    // variants to define
}

Currently this grammar has no rules, but it's a good start.

This code (once processed by Bison) defines a Parser struct at the top of the generated file that looks like this:

#[derive(Debug)]
pub struct Parser {
    pub yylexer: Lexer,
    yy_error_verbose: bool,
    yynerrs: i32,
    yyerrstatus_: i32,

    /* "%code parser_fields" blocks.  */
}

Keep in mind that Parser auto-implements std::fmt::Debug, so all custom fields must implement it too.

The Value enum is what derivations return and what's stored on the parser's stack. This enum must be defined by you, and it has to have the following variants:

  • Uninitialized - a variant that is stored in $$ by default (and that you overwrite)
  • Stolen - a variant that a stack value is replaced with when you take it from the stack by writing $<Variant>N
  • Token(TokenStruct) - a variant that is used when a shift is performed; it holds the TokenStruct returned by your lexer

Additionally, you can have as many variants as you want; they represent whatever your derivation rules return.

In our case we want the variants Number (to represent a numeric expression) and None (required to represent the return value of the top-level rule).

#[derive(Clone, Debug)]
pub enum Value {
    None,
    Uninitialized,
    Stolen,
    Token(Token),
    Number(i32),
}

impl Default for Value {
    fn default() -> Self {
        Self::Stolen
    }
}

It must implement Clone, Debug and Default (.take() is used under the hood to swap &mut Value with Value::default(), so default() must return the Stolen variant).
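
To see why default() must return Stolen, here is roughly what taking a value off the stack does (a sketch):

let mut slot = Value::Number(42);
let taken = std::mem::take(&mut slot); // swaps Value::default() into the slot
assert!(matches!(taken, Value::Number(42)));
assert!(matches!(slot, Value::Stolen)); // the slot is now "stolen"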

Also, the skeleton generates a bunch of constants on Lexer representing token numbers; they look like this:

// AUTO-GENERATED
impl Lexer {
    /* Token kinds.  */
    // Token "end of file", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const YYEOF: i32 = 0;
    // Token error, to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const YYerror: i32 = 256;
    // Token "invalid token", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const YYUNDEF: i32 = 257;
    // Token "+", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const tPLUS: i32 = 258;
    // Token "-", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const tMINUS: i32 = 259;
    // Token "*", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const tMUL: i32 = 260;
    // Token "/", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const tDIV: i32 = 261;
    // Token "(", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const tLPAREN: i32 = 262;
    // Token ")", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const tRPAREN: i32 = 263;
    // Token "number", to be returned by the scanner.
    #[allow(non_upper_case_globals, dead_code)]
    pub const tNUM: i32 = 264;
}

Thus, we can define our lexer logic:

use crate::{Loc, Value};

/// A token that is emitted by a lexer and consumed by a parser
#[derive(Clone)]
pub struct Token {
    // Required field, used by a skeleton
    pub token_type: i32,

    // Optional field, used by our custom parser
    pub token_value: i32,

    // Required field, used by a skeleton
    pub loc: Loc,
}

/// `Debug` implementation
impl std::fmt::Debug for Token {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_> /*' fix quotes */) -> std::fmt::Result {
        f.write_str(&format!(
            "[{}, {:?}, {}...{}]",
            // token_name is a small helper that maps a token id to its name (not shown here)
            token_name(self.token_type),
            self.token_value,
            self.loc.begin,
            self.loc.end
        ))
    }
}

impl Token {
    /// Used by a parser to "unwrap" `Value::Token` variant into a plain Token value
    pub(crate) fn from(value: Value) -> Token {
        match value {
            Value::Token(v) => v,
            other => panic!("expected Token, got {:?}", other),
        }
    }
}
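
The Lexer struct itself is ours to define; for this example a plain token buffer is enough (a minimal sketch matching the constructor below):

pub struct Lexer {
    tokens: Vec<Token>,
}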


#[allow(non_upper_case_globals)]
impl Lexer {
    pub fn new(src: &str) -> Self {
        let mut tokens = vec![];

        for (idx, c) in src.chars().enumerate() {
            let (token_type, token_value) = match c {
                '0' => (Self::tNUM, 0),
                '1' => (Self::tNUM, 1),
                '2' => (Self::tNUM, 2),
                '3' => (Self::tNUM, 3),
                '4' => (Self::tNUM, 4),
                '5' => (Self::tNUM, 5),
                '6' => (Self::tNUM, 6),
                '7' => (Self::tNUM, 7),
                '8' => (Self::tNUM, 8),
                '9' => (Self::tNUM, 9),
                '+' => (Self::tPLUS, -1),
                '-' => (Self::tMINUS, -1),
                '*' => (Self::tMUL, -1),
                '/' => (Self::tDIV, -1),
                '(' => (Self::tLPAREN, -1),
                ')' => (Self::tRPAREN, -1),
                ' ' => continue,
                _ => panic!("unknown char {}", c),
            };
            let token = Token {
                token_type,
                token_value,
                loc: Loc {
                    begin: idx,
                    end: idx + 1,
                },
            };
            tokens.push(token)
        }
        tokens.push(Token {
            token_type: Self::YYEOF,
            token_value: 0,
            loc: Loc {
                begin: src.len(),
                end: src.len() + 1,
            },
        });

        Self { tokens }
    }

    pub(crate) fn yylex(&mut self) -> Token {
        self.tokens.remove(0)
    }
}
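
A quick sanity check of how this lexer behaves (illustrative):

let mut lexer = Lexer::new("1 + 2");
assert_eq!(lexer.yylex().token_type, Lexer::tNUM);
assert_eq!(lexer.yylex().token_type, Lexer::tPLUS);
assert_eq!(lexer.yylex().token_type, Lexer::tNUM);
assert_eq!(lexer.yylex().token_type, Lexer::YYEOF);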

This lexer is not buffered and does unnecessary work in case of a syntax error, but let's use it as it's easier to understand.

Now let's define Parser <-> Lexer composition:

impl Parser {
    pub fn new(lexer: Lexer) -> Self {
        Self {
            yy_error_verbose: true,
            yynerrs: 0,
            yyerrstatus_: 0,
            yylexer: lexer,
        }
    }

    fn next_token(&mut self) -> Token {
        self.yylexer.yylex()
    }

    fn report_syntax_error(&self, ctx: &Context) {
        eprintln!("syntax error: {:#?}", ctx)
    }
}

Parser encapsulates Lexer and calls it from the next_token method, which is invoked by the skeleton.

Time to define rules:

%type expr number program

%%

 program: expr
            {
                self.result = Some($<Number>1);
                $$ = Value::None;
            }
        | error
            {
                self.result = None;
                $$ = Value::None;
            }

    expr: number
            {
                $$ = $1;
            }
        | tLPAREN expr tRPAREN
            {
                $$ = $2;
            }
        | expr tPLUS expr
            {
                $$ = Value::Number($<Number>1 + $<Number>3);
            }
        | expr tMINUS expr
            {
                $$ = Value::Number($<Number>1 - $<Number>3);
            }
        | expr tMUL expr
            {
                $$ = Value::Number($<Number>1 * $<Number>3);
            }
        | expr tDIV expr
            {
                $$ = Value::Number($<Number>1 / $<Number>3);
            }

  number: tNUM
            {
                $$ = Value::Number($<Token>1.token_value);
            }

%%

As you can see our grammar has the following rules:

program: expr
       | error

   expr: number
       | '(' expr ')'
       | expr '+' expr
       | expr '-' expr
       | expr '*' expr
       | expr '/' expr

 number: [0-9]

$$ is the return value and it has type Value. You can use $1, $2, etc. to get items 1, 2, etc. that are not unwrapped, i.e. that also have type Value. To unwrap them you can use $<Variant>1, but then you must have the following method:

impl Variant {
    fn from(value: Value) -> Self {
        match value {
            Value::Variant(out) => out,
            other => panic!("wrong type, expected Variant, got {:?}", other),
        }
    }
}

In our case we want to have only one such variant - Number:

use crate::Value;

#[allow(non_snake_case)]
pub(crate) mod Number {
    use super::Value;

    pub(crate) fn from(value: Value) -> i32 {
        match value {
            Value::Number(out) => out,
            other => panic!("wrong type, expected Number, got {:?}", other),
        }
    }
}

Yes, it's a mod, but that's absolutely OK. It doesn't matter what Variant is, it's all about calling Variant::from(Value).
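
Conceptually, writing $<Number>1 in an action takes the Value out of the stack slot and passes it through Number::from (a sketch of what happens):

let n: i32 = Number::from(Value::Number(42));
assert_eq!(n, 42);
// Number::from(Value::Stolen) would panic: "wrong type, expected Number, got Stolen"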

Also, as you might notice, there's a self.result = ... assignment in the top-level rule program. It's required because there's no other way to get the value that is left on the stack when parsing finishes; the stack is not a part of the parser's state.

This is why we also need to declare it:

%code parser_fields {
    result: Option<i32>,
}

// And Parser's constructor must initialize it:
fn new(lexer: Lexer) -> Self {
    Self {
        result: None,
        // ...
    }
}

Now we need a build.rs script:

extern crate rust_bison_skeleton;
use rust_bison_skeleton::{process_bison_file, BisonErr};
use std::path::Path;

fn main() {
    match process_bison_file(&Path::new("src/parser.y")) {
        Ok(_) => {}
        Err(BisonErr { message, .. }) => {
            eprintln!("Bison error:\n{}\nexiting with 1", message);
            std::process::exit(1);
        }
    }
}

After running cargo build we should get src/parser.rs with all auto-generated and manually written code combined into a single file.

You can find a full example in tests/src/calc.y.
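
Putting it all together, driving the parser looks roughly like this (a sketch; do_parse is an assumed name for the generated entry point, check the generated src/parser.rs for the exact API):

let lexer = Lexer::new("1 + (4 - 3) * 2");
let mut parser = Parser::new(lexer);
parser.do_parse(); // assumed name for the generated parse entry point
assert_eq!(parser.result, Some(3)); // 1 + (4 - 3) * 2 = 3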

Error recovery

This skeleton fully matches the behavior of the built-in Bison skeletons:

  • If you want to return an error from a derivation you can either:
    • do return Ok(Self::YYERROR);
    • or just Err(())?
  • If you want to completely abort execution you can:
    • return Ok(Self::YYACCEPT); to abort with success-like status code
    • return Ok(Self::YYABORT); to abort with error-like status code

Once an error is returned, a special error rule can catch and "swallow" it:

 numbers: number
          {
              $$ = Value::NumbersList(vec![ $<Number>1 ]);
          }
        | numbers number
          {
              $$ = Value::NumbersList( $<NumbersList>1.append($<Number>2) );
          }
        | error number
          {
              // ignore $1 and process only $2
              $$ = Value::NumbersList(vec![ $<Number>2 ]);
          }

  number: tNUM         { $$ = $1 }
        | tINVALID_NUM { return Ok(Self::YYERROR); }

Information about the error is automatically passed to Parser::report_syntax_error. The Context it takes has token() and location() methods, so an implementation of this method can look like this:

fn report_syntax_error(&mut self, ctx: &Context) {
    let token_id: usize = ctx.token().code().try_into().unwrap();
    let token_name: &'static str = Lexer::TOKEN_NAMES[token_id];
    let error_loc: &Loc = ctx.location();

    eprintln!("Unexpected token {} at {:?}", token_name, error_loc);
}

Generic parser

To make Parser generic you need to configure the following directive:

%define api.parser.generic {<T>}

This code is added to struct Parser and impl Parser:

struct Parser<T> {
    // ...
}

impl<T> Parser<T> {
    // ...
}

If you want to specify lifetimes, make sure to fix quotes with comments:

%define api.parser.generic {<'a /* 'fix quotes */, T>}

Performance

You can find a perf example that runs a Parser thousands of times and creates a flamegraph.

Comments
  • Error on Cargo build when using process_bison_file

    When I use cargo build on any project with process_bison_file in the custom build script, it fails with the following error:

    Compiling tests v0.0.1 (C:\Users\PATH\bisonProject\rust-bison-skeleton\tests)
    error: failed to run custom build command for `tests v0.0.1 (C:\Users\PATH\bisonProject\rust-bison-skeleton\tests)`
    
    Caused by:
      process didn't exit successfully: `C:\Users\PATH\bisonProject\rust-bison-skeleton\target\debug\build\tests-7f8f08867fde0eae\build-script-build` (exit code: 101)
      --- stderr
      CARGO_MANIFEST_DIR = "C:\\Users\\PATH\\bisonProject\\rust-bison-skeleton\\rust-bison-skeleton"
      file = "rust-bison-skeleton\\src\\lib.rs"
      current dir = Ok("C:\\Users\\PATH\\bisonProject\\rust-bison-skeleton\\tests")
      thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: NotFound, message: "program not found" }', rust-bison-skeleton\src\lib.rs:53:60
      note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    

    As is visible in the error, it even happens when using the tests in a clone of this repo. I have also tried adjusting the path for the OS, but that doesn't work either.

    Any idea how to fix it? (Or even what exactly the problem is)

    Using Win10

    opened by Larendol 3
  • Update jemallocator requirement from 0.3.2 to 0.5.0 in /scripts

    Updates the requirements on jemallocator to permit the latest version.

    Changelog

    Sourced from jemallocator's changelog.

    0.5.0 - 2022-05-19

    • Update jemalloc to 5.3.0 (#23)

    0.4.3 - 2022-02-21

    • Added riscv64 support (#14)

    0.4.2 - 2021-08-09

    • Fixed prof not working under certain condition (#9) (#12)
    • Updated paste to 1 (#11)

    0.4.1 - 2020-11-16

    • Updated jemalloc to fix deadlock during initialization
    • Fixed failure of generating docs on release version

    0.4.0 - 2020-07-21

    • Forked from jemallocator master
    • Upgraded jemalloc to 5.2.1 (#1)
    • Fixed wrong version in generated C header (#1)
    • Upgraded project to 2018 edition (#2)
    dependencies 
    opened by dependabot[bot] 2
  • Can you create a compatible flex `.y` file?

    Hi, I am learning flex and bison, and I have learned the Rust language. I like Rust because it is a safe language. I have looked at the src/test/calc.y file many times; it has some Rust statements in it, so I am uncertain whether it can work with flex. Can you give me some help?

    opened by tu6ge 1
  • Update pprof requirement from 0.10 to 0.11 in /scripts

    Updates the requirements on pprof to permit the latest version.

    Changelog

    Sourced from pprof's changelog.

    [0.11.0] - 2022-11-03

    Changed

    • Upgrade prost 0.11 (#166)
    • Upgrade criterion from 0.3 to 0.4 (#163)

    Fixed

    • Restart syscalls interuppted by SIGPROF when possible (#167)
    • Only do per-frame-blocklist-check when frame-pointer is enabled (#172)

    [0.10.1] - 2022-08-29

    Changed

    • Update MAX_DEPTH to 128 (#159)

    Fixed

    • Fixed clippy warnnings and ignore prost mod (#160)

    [0.10.0] - 2022-06-27

    Changed

    • Remove backtrace-rs feature, as the default choice when not specified (#130)

    Added

    • Add sample_timestamp to Frames and UnresolvedFrames in order to have more fine-grained info on when the samples are collected (#133)

    Fixed

    • Export UnresolvedReport type to allow developers to get the unresolved report (#132)

    [0.9.1] - 2022-05-19

    Fixed

    • Protect the error number in signal handler (#128)

    [0.9.0] - 2022-05-09

    Added

    • Add frame-pointer feature to unwind the stack with frame pointer (#116)

    Changed

    • The user has to specify one unwind implementation (backtrace-rs or frame-pointer) in the features (#116)

    [0.8.0] - 2022-04-20

    Changed

    Fixed

    • Fix pthread_getname_np not available on musl (#110)

    ... (truncated)

    dependencies 
    opened by dependabot[bot] 0
  • Update pprof requirement from 0.9 to 0.10 in /scripts

    Updates the requirements on pprof to permit the latest version.

    Changelog

    Sourced from pprof's changelog.

    [0.10.0] - 2022-06-27

    Changed

    • Remove backtrace-rs feature, as the default choice when not specified (#130)

    Added

    • Add sample_timestamp to Frames and UnresolvedFrames in order to have more fine-grained info on when the samples are collected (#133)

    Fixed

    • Export UnresolvedReport type to allow developers to get the unresolved report (#132)

    [0.9.1] - 2022-05-19

    Fixed

    • Protect the error number in signal handler (#128)

    [0.9.0] - 2022-05-09

    Added

    • Add frame-pointer feature to unwind the stack with frame pointer (#116)

    Changed

    • The user has to specify one unwind implementation (backtrace-rs or frame-pointer) in the features (#116)

    [0.8.0] - 2022-04-20

    Changed

    Fixed

    • Fix pthread_getname_np not available on musl (#110)

    [0.7.0] - 2022-03-08

    Added

    • Add rust-protobuf support by adding protobuf-codec features (#106)

    Changed

    • protobuf feature is renamed to prost-codec to align all other tikv projects (#106)

    [0.6.2] - 2021-12-24

    Added

    • implement Clone for ProfilerGuardBuilder @​yangkeao
    • Add thread names and timing information to protobuf reports @​free

    [0.6.1] - 2021-11-01

    Added

    • blocklist to skip sampling in selected shared library @​yangkeao

    Fixed

    ... (truncated)

    dependencies 
    opened by dependabot[bot] 0
  • Update pprof requirement from 0.7 to 0.9 in /scripts

    Updates the requirements on pprof to permit the latest version.

    Changelog

    Sourced from pprof's changelog.

    [0.9.1] - 2022-05-19

    Fixed

    • Protect the error number in signal handler (#128)

    [0.9.0] - 2022-05-09

    Added

    • Add frame-pointer feature to unwind the stack with frame pointer (#116)

    Changed

    • The user has to specify one unwind implementation (backtrace-rs or frame-pointer) in the features (#116)

    [0.8.0] - 2022-04-20

    Changed

    Fixed

    • Fix pthread_getname_np not available on musl (#110)

    [0.7.0] - 2022-03-08

    Added

    • Add rust-protobuf support by adding protobuf-codec features (#106)

    Changed

    • protobuf feature is renamed to prost-codec to align all other tikv projects (#106)

    [0.6.2] - 2021-12-24

    Added

    • implement Clone for ProfilerGuardBuilder @​yangkeao
    • Add thread names and timing information to protobuf reports @​free

    [0.6.1] - 2021-11-01

    Added

    • blocklist to skip sampling in selected shared library @​yangkeao

    Fixed

    [0.6.0] - 2021-10-21

    Changed

    Security

    [0.5.0] - 2021-10-21

    Changed

    ... (truncated)

    dependencies 
    opened by dependabot[bot] 0
  • Update pprof requirement from 0.4 to 0.5 in /tests

    Updates the requirements on pprof to permit the latest version.

    Commits
    • e237a5e bump version to v0.5.0
    • 26ba741 Bump prost* (#73)
    • 9e9e4e9 Bump actions-rs/cargo from 1 to 1.0.3 (#63)
    • 23073bb Bump actions-rs/toolchain from 1 to 1.0.7 (#64)
    • a3cd804 bump version to v0.4.4
    • 24baced add phantom data for criterion output (#68)
    • e17a0b0 modify document according to the current API (#62)
    • 6e67b56 bump version to v0.4.3
    • 35f81ba Adjust the output paths criterion::PProfProfiler uses to support benchmark ...
    • f5aef7b Update nix requirement from 0.19 to 0.20 (#56)
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • Update pprof requirement from 0.3 to 0.4 in /tests

    Updates the requirements on pprof to permit the latest version.

    dependencies 
    opened by dependabot[bot] 0
  • Replace `Loc` with `Range<u32>`

    By changing Loc to be

    pub struct Loc {
        start: u32, // changed "begin" to "start"
        end: u32,
    }

    you can actually swap out Loc altogether for Range<u32>. To avoid confusing situations when using Range<u32> as an iterator, Range<u32> does not implement Copy, so you will need to add .clone() in a few places. (The calls to clone are really just copies anyway, except requiring a call to clone makes it more explicit that a new Range<u32> is being created.)

    If client code wants to use its own custom location type, they can implement From<CustomLocType> for Range<u32> or have a to_range() method or something.
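
    For example (a sketch of the suggested conversion, with a hypothetical CustomLoc type):

    struct CustomLoc {
        begin: u32,
        end: u32,
    }

    impl From<CustomLoc> for std::ops::Range<u32> {
        fn from(loc: CustomLoc) -> Self {
            loc.begin..loc.end
        }
    }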

    I can make a PR if you'd like.

    opened by rljacobson 0
  • Redundant representation of Tokens/Terminals

    As far as I can tell, the tokens appear in the following ways:

    1. as constants on the SymbolKind struct
    2. wrapped in SymbolKind in the array SymbolKind::VALUES_ (which seems to just emulate an enum…?)
    3. as constants on the Lexer

    In the calculator example in the docs/test, the lexer and parser only needed to use the constants on Lexer, but more sophisticated projects might need tokens enumerated in the Value or Token types, which is another list.

    I think there is a simpler way. Suppose we instead have a Symbol enumeration that has all of the tokens as variants as in the following:

    // Use, e.g., the `enum-primitive-derive` crate for i32<->enum conversion.
    #[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Debug, Primitive)]
    #[repr(i32)]
    pub enum Symbol {
        YYEmpty,
        YYEOF,
        YYerror,
        YYUNDEF,
        //    ⋮     Whatever other "utility" variants are needed, 
        //    ⋮     so long as there are a statically known number of them.
        UserTerminalToken1,
        UserTerminalToken2,
        UserTerminalToken3,
        //    ⋮     All other terminal tokens the user declared in the spec file.
        UserTerminalTokenN,
        UserNonterminalSymbol1,
        UserNonterminalSymbol2,
        UserNonterminalSymbol3,
        //    ⋮     All other nonterminal symbols from the spec file.
        UserNonterminalSymbolM
    }
    

    This enum is generated but can be used by the lexer or whatever other code might need it. Also, simple translation/conversion functions would be generated as in the following:

    impl Symbol {
        pub fn yychar_value(&self) -> i32 {
            match *self {
                Symbol::YYEmpty => -2,
                Symbol::YYEOF => 0,
                //    ⋮    Whatever other "special" values there are.
                Symbol::YYerror => 256,
                Symbol::YYUNDEF => 257,
                other => (other as i32) - (Symbol::UserNonterminalSymbol1 as i32) + 258,
                // This constant 258 should be statically known. It is the first token value for yychar.
            }
        }
    
        pub fn yytoken_value(&self) -> i32 {
            *self as i32
        }
    
        /// The inverse of the `Symbol::yychar_value()` function.
        pub fn from_yychar(yychar: i32) -> Symbol {
            match yychar {
                -2 => Symbol::YYEmpty,
                //    ⋮    Whatever other "special" values there are.
                i if i < 256 => Symbol::YYUNDEF,
                256 => Symbol::YYerror,
                257 => Symbol::YYUNDEF,
                i if i <= 256 + YYNTOKENS_ => Symbol::from_i32(i - 258 + (Symbol::UserNonterminalSymbol1 as i32)).unwrap(),
                _ => Symbol::YYUNDEF,
            }
        }
    
        pub fn name(&self) -> &'static str {
            yynames_[(*self as i32) as usize]
        }
    }
    

    This has the advantages of:

    1. moving the consts in Lexer to a dedicated enum
    2. moving the consts in SymbolKind to a dedicated enum, eliminating the need for SymbolKind and all the SymbolKind::get() calls
    3. making the yychar variable redundant altogether, replacing each read of yychar with a yytoken::yychar_value(), for example
    4. making yytranslate_() and yytranslate_table_ unnecessary
    5. eliminates Lexer::TOKEN_NAMES (which I think is redundant anyway...?)

    I am not sure I have all the details correct in the code above, but it seems to me that something like this should work.

    opened by rljacobson 0
Owner
Ilya Bylich