enum_pipeline

Provides a way to use enums to describe and execute ordered data pipelines. 🦀 🐾

I needed a succinct way to describe 2D pixel map operations for a game I'm working on. I wanted callers to be able to easily determine all possible operations (hence an enum), with per-operation data (hence variants) and operation-specific logic (proc-macro coming soon). This is what I came up with!

println!("init"), Operations::Run(delta) => println!("do work") } } } fn do_work() { let my_op_pipeline = vec![ Operations::Init, Operations::Allocate(1.0, 1.0), Operations::Init, Operations::Run(1.0), ] .into_pipeline(); my_op_pipeline.execute(); // prints: // init // allocate something // init // do work } ">
use enum_pipeline::{
    Execute, IntoPipelineVec
};

enum Operations {
    Allocate(f32, f32),
    Init,
    Run(f32)
}

impl Execute for Operations {
    fn execute(self) {
        match self {
            // data-carrying variants bind their payload; the underscores
            // silence unused-variable warnings in this toy example
            Operations::Allocate(_x, _y) => println!("allocate something"),
            Operations::Init => println!("init"),
            Operations::Run(_delta) => println!("do work"),
        }
    }
}

fn do_work() {
    let my_op_pipeline = vec![
        Operations::Init,
        Operations::Allocate(1.0, 1.0),
        Operations::Init,
        Operations::Run(1.0),
    ]
    .into_pipeline();

    my_op_pipeline.execute();
    // prints:
    // init
    // allocate something
    // init
    // do work
}

There are also trait variants for pipelines that carry global data (passed as an argument to execute), and I'm working on a proc-macro that can generate the boilerplate match logic, dispatching to a user-provided function for each operation.
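
For a rough illustration of both ideas (shared data handed to execute, and the match dispatching to per-operation functions), here is a minimal hand-rolled sketch. Everything in it (World, execute_with, the handle_* functions) is made up for this example rather than taken from the crate's API, so check docs.rs for the real trait names:

// Hypothetical shared state passed through the pipeline (not part of the crate).
struct World {
    allocations: usize,
}

// Per-operation handlers that generated match logic could dispatch to.
fn handle_allocate(world: &mut World, _x: f32, _y: f32) {
    world.allocations += 1;
}

fn handle_init(world: &mut World) {
    world.allocations = 0;
}

enum Operations {
    Allocate(f32, f32),
    Init,
}

impl Operations {
    // Stand-in for an "execute with global data" style trait method.
    fn execute_with(self, world: &mut World) {
        match self {
            Operations::Allocate(x, y) => handle_allocate(world, x, y),
            Operations::Init => handle_init(world),
        }
    }
}

fn main() {
    let mut world = World { allocations: 0 };
    let ops = vec![
        Operations::Init,
        Operations::Allocate(1.0, 1.0),
    ];
    for op in ops {
        op.execute_with(&mut world);
    }
    assert_eq!(world.allocations, 1);
}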

TODO

  • finish the proc-macro stuff
  • document the proc-macro
  • add example directory

License

MIT
