Scripting language focused on processing tabular data.



Welcome to the ogma project! ogma is a scripting language focused on ergonomically and efficiently processing tabular data, with batteries included. Mixing aspects of terminal shells and functional programming, the ogma project lets one interact with data in a refreshing way. The language is syntactically lightweight yet boasts powerful constructs designed to efficiently work with tabular data.

Getting Started

Language Characteristics

ogma takes inspiration from multiple sources. For the semantics, programming languages Rust, Haskell, ML, and Elm have all been an influence, while the syntax is derived primarily from terminal shells (with smatterings from other languages). Some major characteristics of ogma are:

  • small language with few keywords, opting for prefix notation,
  • pipelines chain commands together, composing their effects,
  • strictly typed,
  • extensible with user-defined implementations and types.
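
To give a quick flavour of these characteristics, here is a small sketch assembled only from commands that appear elsewhere on this page (the input command \, prefix commands such as +, and a user-defined def); treat it as illustrative rather than authoritative:

# commands are chained into pipelines and use prefix notation
\ 2 | + 2

# a def extends the language; this one constrains its input type to Number
def foo Num (a) { + $a }
\ 2 | foo 3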

Development and Support

The ogma project needs development and financial support to keep growing. Financial support in the form of sponsorship is greatly appreciated and helps us spend more time on the project. The project is open source and hosted on GitHub; contributions are encouraged, not just for features but also for important aspects such as bug fixes and documentation. There is also a forum for asking and answering questions, which is a great way to cultivate a community around the project, and participation is encouraged.


Pull requests are appreciated and encouraged! Each request will be subject to review and will need to pass CI before being merged. Please ensure:

  • Tests are added where necessary,
  • cargo fmt -- --check passes,
  • cargo clippy -- -D warnings passes,
  • cargo test passes,
  • An item describing your pull request is added to

Happy coding!

Release Process

When a release is ready, simply create a release tag and push it to GitHub. The release workflow takes care of the build and release creation. The release body is sourced from the release notes; be sure to update these before pushing the tag!

# Update the release notes, then:
git push
git tag v#.#
git push origin v#.#
  • Fix some markdown issues

    There are several things yet to be considered:

    • [ ] Every file in docs/ starts with

      <iframe src="./.ibox.html?raw=true" style="border:none; position:fixed; width:40px; right:0; z-index=999;"></iframe>

      MDL doesn't like that because:

      • It's inline HTML
      • The first line should be a top-level heading

      Maybe the documentation renderer could add that?

    • [ ] docs/15 Examples/15.3 Calculating has an embedded YouTube video:

      <iframe title="YouTube video player" src=""
      width="560" height="315" frameborder="0" allowfullscreen
      allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"></iframe>

      which MDL doesn't like, as it's inline HTML. Daedalus seems to already support embedded local videos; maybe it could also have YouTube/Vimeo/etc. support? Although this is a Daedalus issue.

    • [ ] Every second-level heading (## Something) is followed by a ---, for example docs/01 (permalink):

      ## Characteristics

      Maybe this could be done with simple CSS, like on GitHub? A slightly modified example:

      h2 {
          padding-bottom: .3em;
          border-bottom: 1px solid gray;
      }
      But there's one exception which confuses me, in docs/05 Syntax and Semantics/5.1:

      ## Sub-expressions without the braces
      There exists a shorthand for sub-expressions which can be used as the **_last_** argument. The  

      It's the only place where h2 isn't followed by an hr.

    could this also be hacktoberfest-accepted?

    opened by thecaralice 10
  • [TASK] Alter `Expecting` enum to be a bitflag

    Alter the Expecting enum from parsing to use bitflags, indicating it could be composed of more than a single variant.

    good first issue refactor 
    opened by kurtlawrence 5
  • [BUG] Inference depth issue

    Current Behaviour

    open executions-lt.csv | filter mean = -1 | filter conf != 1.01 | grp coin  | append --fiat { get value | fold 0 { + $row.sell-fiat | - $ } } | append --trades { get value |:Table len } | append --avg { get value | let {len} $l | fold 0 + $row.ret-pct | / $l }

    Altering the last get value to get:Table value makes it compile.
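
    For reference, the workaround described above annotates the final get; this snippet is taken directly from the pipeline above, with only the :Table annotation added:

    append --avg { get:Table value | let {len} $l | fold 0 + $row.ret-pct | / $l }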

    opened by kurtlawrence 1
  • [TASK] Mandatory `def` input types

    Constrain the type system such that all defs have a known input type.


    More robust type inference is the major goal here. By knowing the set of concrete types that a def is expecting, only that set of types has to be tested.


    Agnostic defs

    Many defs are input agnostic. This will still be a requirement. The proposed solution is to force the input to be Nil. Unfortunately, this will break how the current type inference flows types. The convention will break certain patterns, for example:

    # This would previously work, and list out along folder `foo-bar`.
    \ 'foo' | ls { + '-bar' }
    # Now the concatenation would have to occur sooner, be bound to a variable, and then passed as an argument.
    \ 'foo' | + '-bar' | let $x | ls $x

    Since this pattern becomes unwieldy with the change, it might be worthwhile adding a definition which can handle this:

    \ 'foo' | + '-bar' | zog ls $z
                         ^^^ --- ^ a def which stores input in $z and calls `ls` with Nil input

    Another approach would be to consider defs with Nil input types to be agnostic. It is rare to actually switch input to Nil to call a def, so rather than having a distinction between Nil and anything, combine the two and add a special variable (or literal?) which can be populated with the output of the previous block. This approach may conflict with the goal of reducing type inference complexity. The issue is the coercion to any output type; it makes the type inference unconstrained.

    def foo Nil (len:Num) { \ $len | + 2 }
    ls | foo { \#i | len }

    Maybe a combination?

    To keep the input type as a constraint, a defining characteristic of the call needs to be present. The input def (\) would be a good candidate. Parsing would need to be altered such that trailing characters are taken as a def identifier.

    def foo Nil (len:Num) { \ $len | + 2 }
                             ^ a space is now necessary
    ls | \foo { \ $prev | len }
         ^^^^     ^^^^ store output of previous block in `$prev`
         `foo` is known to be called with `Nil`.
    # Another way to write this
    ls | len | \foo $prev

    The let def

    The let def is special, since it is required to be general.
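
    As a sketch of that generality, composed only from let invocations that appear elsewhere on this page (illustrative, not authoritative):

    # a Str input, bound to $x
    \ 'foo' | + '-bar' | let $x
    # a Num input, bound to $n
    \ 2 | + 2 | let $n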

    • [x] #109
    • [x] #110
    • [ ] #111
    • [ ] #112
    opened by kurtlawrence 1
  • [BUG] `fold` runtime error, not compile error

    This should not 'compile' since the output of the fold is a string, but it is expecting a tuple.

    Instead, it generates a runtime error. This is a bug; it needs to fail compilation.

    # Convert a number to hexadecimal string with `pad` padding.
    def to-hex Num (pad:Num) {
        let $n | range 0 $pad | fold {Tuple '' $n} {
            let #i.t0 $s #i.t1 $n | \$n | mod 16
            | if {=0} '0' {=1} '1' {=2} '2' {=3} '3' {=4} '4' {=5} '5' {=6} '6' {=7} '7'
                 {=8} '8' {=9} '9' {=10} A {=11} B {=12} C {=13} D {=14} E F
            | + $s | rev
        }
    }
    opened by kurtlawrence 1
  • [TASK] come say hi?

    Hi - I'm JT and I work on the Nushell project. We just found out about this project, very cool! Are you interested in coming to say "hi" on the Nushell discord, by chance? We've got a channel called #external-collab where folks interested in this area can collaborate with each other, brainstorm ideas, etc.

    opened by jntrnr 1
  • [TASK] Walk fs and parse `.ogma` files

    Build a structure by walking the filesystem hierarchy for *.ogma files.

    Aspects to Develop

    • Structure could be BTreeMap<PathBuf, Vec<File>>
    • PathBuf is used as the partition name/path
    • Clashes between name.ogma and name/foo.ogma should error out with an informative error message
    • Set up a partition test hierarchy which tests all these things
    opened by kurtlawrence 0
  • [TASK] Add a _file_ parser which can split an ogma file into its constituents

    For use with partitions, but could also be used with the batch system. Parse a file into a list of directives, list of types, list of impls, and list of exprs. The types, impls, and exprs can all support doc comments.

    Aspects to Develop

    • [x] Develop parser
    • [ ] Replace batch parser with this parser
    opened by kurtlawrence 0
  • [TASK] Initialise `defs2::Definitions` with internal items

    Similar to the current Definitions, initialise the ogma-defined defs. This will have to go through the partitions system to add these items to the root.

    Aspects to Develop

    • [ ]
    opened by kurtlawrence 0
  • [TASK] Resolve partition imports

    Given a set of imports, resolve all the dependencies. Resolving means to parse/add the items and make them available in the Definitions.

    Aspects to Develop

    • Lazily parse/add dependencies
    • Completely based on import directives
    • Globs may require parsing/adding all items (depending on glob matches)
    • Probably use a stack-based system
    • Will need cycle detection
      • foo imports bar which imports foo
    opened by kurtlawrence 0
  • [TASK] Convert walked fs hierarchy into partition graph

    Using the structure built from #175, add each item into the partition graph structure.

    Aspects to Develop

    • Testable in memory
    • Tests for item name clashes within a partition
    • Items do not need to be parsed/compiled; this can be done lazily (however, the code snippets would need to be stored)
    • Check for partition name validity
    opened by kurtlawrence 0
  • [TASK] Rework how inline nested defs are detected

    Currently, defs are detected at the parsing stage, using the known defs list. With the introduction of partitions and lazy parsing of those partitions, it quickly becomes tricky to determine which def needs to be parsed first.

    A more structured approach to the parsing may be required, one which scans the partition tree first, setting up pointers to potential defs (fairly easy with the def keyword).

    Ultimately, thought needs to go into how much of the partition tree needs to be fully parsed and memoised. I would prefer a lazy approach which only parses the referenced defs, with some caching to avoid continually hitting the disk.

    opened by kurtlawrence 0
  • [TASK] Rework completion expecting matching

    If #169 gets merged, rework the completions code to match on whether the expecting contains a flag, rather than value matching with the flags themselves.

    good first issue refactor 
    opened by kurtlawrence 0
  • [BUG] Should error with too many arguments

    >> \ 2 | foo 'hasdfg'
    4
    >> \ 2 | foo 3
    4
    >> foo --help

    Help: foo
    --> shell:0
     | ---- Input Type: ----
     | user defined implementation in shell
     | def foo (a) { + $a }
     |
     | Usage:
     |  => foo a
     |
     | ---- Input Type: Number ----
     | user defined implementation in shell
     | def foo Num (a) { + 2 }
     |
     | Usage:
     |  => foo a

    >> def foo Num (a) { + 2 }
    ()
    >> \ 'Hello' | foo ' Jason'
    Hello Jason
    >> \ 2 | foo 3
    5
    >> def foo (a) { + $a }
    ()

    opened by kurtlawrence 0
  • v0.5(Jul 31, 2022)

    🛑 Breaking Changes

    🔬 New Features

    • Improve help messages for defs with multiple input types
    • The shell supports themes and transparent terminal backgrounds

    πŸ› Bug Fixes

    • Fix variable type inferencing when passing variables to defs
    • Detect trailing command in let and suggest a pipe
    • Fix error where stronger type guarantees were present
    • Fix def not inferring correct input in last
    • Fix locals graph needs updating bug
    • Provide more verbose parsing errors
    • Fix CPU spinning with completion prompt open
    • Fix an uncommon variable shadowing bug by reworking the variable sealing system

    ✨ Other Updates

    • ogma crate API documentation is now published
    • Improve help messages and back-end def input type matching
    • Introduce TypesSets into the inferencer
    • Fully move to TypesSets for more robust type inference
    Source code(tar.gz)
    Source code(zip)
    linux.tar.gz(2.12 MB)
  • v0.4(May 7, 2022)

    🛑 Breaking Changes

    • to-str now defaults to full number formatting with optional formatting string
    • Output inference is somewhat smarter for variadic cmds. It does place stricter typing constraints on + - * / min max.

    🔬 New Features

    • #b -- Newline constant (b for break)
    • filter now accepts Str input: it will supply one character at a time to the predicate.

    πŸ› Bug Fixes

    • Use a BufWriter around a File to improve save performance.
    • Flush the Writer once finished writing.
    • Fixes ( with new compilation engine.
    • Output better error messages for unmatched commands
    • Dot operator now infers output type when used with TableRows.
    • Unscoped variables do not return an internal compiler error

    ✨ Other Updates

    Source code(tar.gz)
    Source code(zip)
    linux.tar.gz(2.04 MB)
  • v0.3(Apr 8, 2022)

    🔬 New Features

      • Major new feature which utilises a new compiler system, providing type inference for arguments.

    πŸ› Bug Fixes

    • The new compiler fixes a variable unsoundness issue

    ✨ Other Updates

    • Modularise the impls module
    • Modularise the testing and extract it into an API test
    Source code(tar.gz)
    Source code(zip)
    linux.tar.gz(2.02 MB)
  • v0.2(Jan 15, 2022)

    This release mostly fixed bugs, along with some crate maintenance. The typify command was introduced, paving the way for type inferencing.

    🔬 New Features

    πŸ› Bug Fixes

    • Handle BOM:
    • Binary now builds as ogma rather than ogma-bin:
    • Remove panic when batch directives are parsed:

    ✨ Other Updates

    Source code(tar.gz)
    Source code(zip)
    linux.tar.gz(1.84 MB)
Lisp dialect scripting and extension language for Rust programs

Ketos Ketos is a Lisp dialect functional programming language. The primary goal of Ketos is to serve as a scripting and extension language for program

Murarth 721 Dec 12, 2022
Rhai - An embedded scripting language for Rust.

Rhai - Embedded Scripting for Rust Rhai is an embedded scripting language and evaluation engine for Rust that gives a safe and easy way to add scripti

Rhai - Embedded scripting language and engine for Rust 2.4k Dec 29, 2022
A static, type inferred and embeddable language written in Rust.

gluon Gluon is a small, statically-typed, functional programming language designed for application embedding. Features Statically-typed - Static typin

null 2.7k Dec 29, 2022
Source code for the Mun language and runtime.

Mun Mun is a programming language empowering creation through iteration. Features Ahead of time compilation - Mun is compiled ahead of time (AOT), as

The Mun Programming Language 1.5k Jan 9, 2023
Implementation of Immix Mark-Region Garbage collector written in Rust Programming Language.

libimmixcons Implementation of Immix Mark-Region Garbage collector written in Rust Programming Language. Status This is mostly usable library. You can

playX 34 Dec 7, 2022
A computer programming language interpreter written in Rust

Ella lang Welcome to Ella lang! Ella lang is a computer programming language implemented in Rust.

Luke Chu 64 May 27, 2022
Oxide Programming Language

Oxide Programming Language Interpreted C-like language with a Rust influenced syntax. Latest release Example programs /// recursive function calls to

Arthur Kurbidaev 113 Nov 21, 2022
The hash programming language compiler

The Hash Programming language Run Using the command cargo run hash. This will compile, build and run the program in the current terminal/shell. Submit

Hash 13 Nov 3, 2022
Interpreted language developed in Rust

Xelis VM Xelis is an interpreted language developed in Rust. It supports constants, functions, while/for loops, arrays and structures. The syntax is s

null 8 Jun 21, 2022
Interactive interpreter for a statement-based proof-of-concept language.

nhotyp-lang Nhotyp is a conceptual language designed for ease of implementation during my tutoring in an introductive algorithmic course at Harbin Ins

Geoffrey Tang 5 Jun 26, 2022
πŸ– ham, general purpose programming language

🍖 ham, a programming language made in rust status: alpha Goals Speed Security Comfort Example fn calc(value){ if value == 5 { return 0

Marc Espín 19 Nov 10, 2022
A small programming language created in an hour

Building a programming language in an hour This is the project I made while doing the Building a programming language in an hour video. You can run it

JT 40 Nov 24, 2022
The Loop programming language

Loop Language Documentation | Website A dynamic type-safe general purpose programming language Note: currently Loop is being re-written into Rust. Mea

LoopLanguage 20 Oct 21, 2022
Stackbased programming language

Rack is a stackbased programming language inspired by Forth, every operation push or pop on the stack. Because the language is stackbased and for a ve

Xavier Hamel 1 Oct 28, 2021
REPL for the Rust programming language

Rusti A REPL for the Rust programming language. The rusti project is deprecated. It is not recommended for regular use. Dependencies On Unix systems,

Murarth 1.3k Dec 20, 2022
A Modern Real-Time Data Processing & Analytics DBMS with Cloud-Native Architecture, built to make the Data Cloud easy

A Modern Real-Time Data Processing & Analytics DBMS with Cloud-Native Architecture, built to make the Data Cloud easy

Datafuse Labs 5k Jan 9, 2023
Concurrent and multi-stage data ingestion and data processing with Rust+Tokio

TokioSky Build concurrent and multi-stage data ingestion and data processing pipelines with Rust+Tokio. TokioSky allows developers to consume data eff

DanyalMh 29 Dec 11, 2022
Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing

Apache Arrow Powering In-Memory Analytics Apache Arrow is a development platform for in-memory analytics. It contains a set of technologies that enabl

The Apache Software Foundation 10.9k Jan 6, 2023
frawk is a small programming language for writing short programs processing textual data

frawk frawk is a small programming language for writing short programs processing textual data. To a first approximation, it is an implementation of t

Eli Rosenthal 1k Jan 7, 2023