nauman

A CI inspired approach for local job automation.

Features · Installation · Usage · FAQ · Examples · Job Syntax

About

nauman is an easy-to-use job automation tool. It arose from the need to automate complex task flows while still preserving the ability to monitor and debug them.

It is heavily inspired by the simplicity of GitHub Actions, the flexibility of Fastlane and the extensibility of Apache Airflow. This tool aims to bring the best of all three to local job automation.

Quick Start

See Installation for how to install nauman on your computer. Try running nauman --version to make sure that it’s installed correctly.

Once nauman is installed and working, create a job file named hello-world.yml in the root of your project with the following contents:

name: Hello World!

tasks:
  - name: Hello World!
    run: echo "Hello World!"
  - name: Greeting
    run: echo "Greetings ${USER}!"

When you invoke nauman hello-world.yml, it runs the job's tasks in the order they are listed in the file. The output should be as follows:

--------------------------
--- Task: Hello World! ---
--------------------------
$ echo "Hello World!"
Hello World!
----------------------
--- Task: Greeting ---
----------------------
$ echo "Greetings ${USER}!"
Greetings egordm!

nauman prints the output of each task to the console. The tasks run within your default shell, and all of their output is captured.

(back to top)

Examples

For more examples, see the examples directory.

Using Hooks

Hooks are first-class citizens in nauman. They represent callbacks for the various events that can occur during the execution of a job.

Let's take a look at a simple use case of hooks to add health checks to a job and its tasks. Create a file named health-checks.yml in the root of your project with the following contents:

name: Example Job Using Health Checks
policy: always

tasks:
  - name: Run a successful program
    run: sleep 2 && echo "Success!"
    hooks:
      on_success:
        - run: curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/fb4c4863-a7f1-44f1-8298-3baabec653d4
  - name: Run a failing program
    run: sleep 2 && exit 1
    hooks:
      on_success:
        - run: curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/0178d446-9b50-4158-b50d-7df098945c81
      on_failure:
        - name: Send failing status code to Health Check
          run: curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/0178d446-9b50-4158-b50d-7df098945c81/$NAUMAN_PREV_CODE

hooks:
  after_job:
    - name: On completion of the job, ping a health check
      run: curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/fb4c4863-a7f1-44f1-8298-3baabec653d4

When you invoke nauman health-checks.yml, it runs all the tasks within the job file even though the second task fails (see Execution Policy: always). See the output below:

--------------------------------------
--- Task: Run a successful program ---
--------------------------------------
$ sleep 2 && echo "Success!"
Success!
-------------------------------------------------------------------------------------------------------------
--- Hook: curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/fb4c4863-a7f1-44f1-8298-3baabec653d4 ---
-------------------------------------------------------------------------------------------------------------
$ curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/fb4c4863-a7f1-44f1-8298-3baabec653d4
-----------------------------------
--- Task: Run a failing program ---
-----------------------------------
$ sleep 2 && exit 1
Task "Run a failing program" completed in 2s with a non-zero exit status: 1. This indicates a failure
------------------------------------------------------
--- Hook: Send failing status code to Health Check ---
------------------------------------------------------
$ curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/0178d446-9b50-4158-b50d-7df098945c81/$NAUMAN_PREV_CODE
-----------------------------------------------------------
--- Hook: On completion of the job, ping a health check ---
-----------------------------------------------------------
$ curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/fb4c4863-a7f1-44f1-8298-3baabec653d4

When the first task succeeds, its on_success hook is executed and sends a health-check ping. When the second task fails, its on_failure hook is executed and reports the failing status code to the health check. Finally, the after_job hook is executed and sends a job-completion ping.

(back to top)

Logging

Logging is a powerful feature of nauman that allows you to log the output of your tasks and hooks to different output streams.

Create a file named logging.yml in the root of your project with the following contents:

name: Example Job Using Logs
options:
  log_dir: ./logs

tasks:
  - name: Print Hello World to stdout
    run: echo "Hello World!"
  - name: Print Hello World to stderr
    run: echo "Hello World!" >&2

logging:
  - type: file
    name: Print stdout to a file
    stdout: true
    stderr: false
    output: ./stdout.log
  - type: file
    name: Print stderr to a file
    stdout: false
    stderr: true
    output: ./stderr.log
  - type: file
    name: Print both stdout and stderr to separate files per task
    split: true
    output: ./separate_logs
  - type: console

Run nauman logging.yml and see the output below:

-----------------------------------------
--- Task: Print Hello World to stdout ---
-----------------------------------------
$ echo "Hello World!"
Hello World!
-----------------------------------------
--- Task: Print Hello World to stderr ---
-----------------------------------------
$ echo "Hello World!" >&2
Hello World!

Additionally, the following files are created:

  • logs/logging_2021-12-05T18:11:14/
    • separate_logs/
      • 000_print-hello-world-to-stdout.log
      • 001_print-hello-world-to-stderr.log
    • stderr.log
    • stdout.log

Here logs is the specified root directory for the logs (see log_dir in Logging for more details). All logs are placed in a logging_* subdirectory named after the date and time of the job run. stdout.log and stderr.log are created for their respective log outputs, and separate_logs/ contains a separate log file for each task with that task's stdout and stderr.

(back to top)

Using Environment Variables

nauman lets you set environment variables for your job. There are multiple ways to do so:

  • As system environment variables: KEY=VALUE nauman
  • As CLI arguments: nauman -e KEY=VALUE
  • As job configuration: env.KEY: VALUE
  • As task configuration: tasks.<task>.env.KEY: VALUE

By creating env-vars.yml in the root of your project with the following content you can test them all:

name: Example Environment variables

env:
  PING_CMD: curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/
  CHECK_1: fb4c4863-a7f1-44f1-8298-3baabec653d4

tasks:
  - name: Job env var
    run: echo $PING_CMD$CHECK_1
  - name: Task env var
    run: echo $PING_CMD$CHECK_1
    env:
      CHECK_1: fb4c4863-a7f1-44f1-8298-3baabec653d4
  - name: System env var
    run: echo $PING_CMD$CHECK_2
  - name: Built-in env vars
    run: echo "Previous task \"$NAUMAN_PREV_NAME\" finished with status $NAUMAN_PREV_CODE"

When you run nauman env-vars.yml -e CHECK_2=0178d446-9b50-4158-b50d-7df098945c81 you will see the following output:

-------------------------
--- Task: Job env var ---
-------------------------
$ echo $PING_CMD$CHECK_1
curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/fb4c4863-a7f1-44f1-8298-3baabec653d4
--------------------------
--- Task: Task env var ---
--------------------------
$ echo $PING_CMD$CHECK_1
curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/fb4c4863-a7f1-44f1-8298-3baabec653d4
----------------------------
--- Task: System env var ---
----------------------------
$ echo $PING_CMD$CHECK_2
curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/0178d446-9b50-4158-b50d-7df098945c81
-------------------------------
--- Task: Built-in env vars ---
-------------------------------
$ echo "Previous task \"$NAUMAN_PREV_NAME\" finished with status $NAUMAN_PREV_CODE"
Previous task "System env var" finished with status 0

In the last task we can see that the NAUMAN_PREV_NAME and NAUMAN_PREV_CODE environment variables are used. These variables are set by nauman based on the previous task. See Environment Variables for more context-specific environment variables.

(back to top)

Features

Hook everything

You can create hooks for all the possible outcomes and events of your job or your tasks. Create job-level or task-local hooks like this:

tasks:
  ...
  - name: My Task
    hooks:
      on_failure:
        ...
      on_success:
        ...
      before_task:
        ...
      after_task:
        ...

hooks:
  before_job:
    ...
  after_job:
    ...
  on_failure:
    ...
  on_success:
    ...
  before_task:
    ...
  after_task:
    ...

(back to top)

Flexible Logging

You can log to a single file or multiple files, to the console, and even choose which log streams to use (stdout, stderr, or both).

logging:
  - name: Log only stdout
    type: file
    stdout: true
    stderr: false
    output: ./stdout.log
  - name: Logs split in files per task
    type: file
    stdout: true
    stderr: true
    split: true
    output: ./per_task_logs
  - name: Logs to console
    type: console
    stdout: true
    stderr: true
  - name: Append output to a shared file
    type: file
    stdout: true
    stderr: true
    output: /var/log/nauman/my_job.log

(back to top)

Context variables

Define more flexible tasks by using context variables.

Currently, the following context variables are supported:

  • NAUMAN_JOB_NAME - Name of the job
  • NAUMAN_JOB_ID - ID of the job
  • NAUMAN_TASK_NAME - Name of the current task
  • NAUMAN_TASK_ID - ID of the current task
  • NAUMAN_PREV_NAME - Name of the previous task
  • NAUMAN_PREV_ID - ID of the previous task
  • NAUMAN_PREV_CODE - Exit code of the previous task

tasks:
  ...
  - name: Use context vars as env vars
    run: echo $NAUMAN_TASK_NAME

(back to top)

Configurable task plan

When one task fails, it does not necessarily stop the whole job. You can configure the execution policy to decide how to proceed.

You can choose between the following options:

  • always - Always execute the task regardless of prior task status.
  • prior_success - Execute the task only if prior task has succeeded.
  • no_prior_failed - Execute the task only if no other task has failed.

# Policy can be defined at job level
policy: no_prior_failed

tasks:
  ...
  - name: Always run this task
    # And overridden at task level
    policy: always

(back to top)

Different shell types

Aside from the default sh shell, you can use bash, python, ruby, php, or specify a path to your own shell.

# Specify a default shell
shell: bash
shell_path: /bin/bash

tasks:
  ...
  - name: Python task
    shell: python
    run: print('Hello World!')
  - name: Virtual env python
    shell: python
    shell_path: '/app/venv/bin/python'
    run: print('Hello World!')
  - name: Ruby task
    shell: ruby
    run: print('Hello World!')
  - name: PHP task
    shell: php
    run: echo 'Hello World!';

(back to top)

Dry run

Want to make sure that your job is configured correctly? You can run your job in dry run mode. This will verify that all tasks are syntactically correct, all shells are usable and warn you about any potential issues (such as missing directories).

nauman --dry-run my_job.yml

(back to top)

Task Outputs

During the execution of every task, a temporary file is created where you can store output variables. The file is automatically deleted after the task finishes, and the variables written to it are loaded into the global context as environment variables for subsequent tasks.

The output file accepts dotenv style syntax.

> "$NAUMAN_OUTPUT_FILE" - name: Use the output variable run: echo $foo">
tasks:
  ...
  - name: Append output to the output file
    run: echo "foo=bar"  >> "$NAUMAN_OUTPUT_FILE"
  - name: Use the output variable
    run: echo $foo

(back to top)

Multiline commands

Sometimes commands can take up more space than a single line. You can use multiline strings to define your commands.

tasks:
  ...
  - name: Multiline
    shell: python
    run: |
      import os
      print(os.environ['NAUMAN_TASK_NAME'])

(back to top)

Dotenv files

You can use dotenv files to define variables for your tasks.

options:
  dotenv: /path/to/my_env.env
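
A dotenv file lists plain KEY=VALUE pairs that are made available to your tasks. A minimal sketch, assuming an illustrative my_env.env with made-up variable names:

# Contents of /path/to/my_env.env
API_TOKEN=abc123
DATA_DIR=/var/data

tasks:
  - name: Use a dotenv variable
    run: echo "Data lives in $DATA_DIR"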

(back to top)

Change your working directory

You can change your working directory by using the cwd option.

cwd: /my/project/dir

tasks:
  ...
  - name: Change working directory to /my/project/dir/task1
    cwd: ./task1
    run: pwd

(back to top)

FAQ

Why use nauman?

Picture this: you want to periodically run a tool that syncs your favorite movies between services. This can be done with a cron job, but what if you want to add more dependent tasks (like also syncing your movie collections)? Easy: create a shell script that runs them both.

Now you also want to keep track of their output (for debugging), add health checks, single-process locking, and so on. Shell scripts are not the best way to do this and can easily get very messy.

With nauman you can create and run a job file that covers it all in a readable and maintainable way.
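
As an illustration, such a job could look roughly like the sketch below (sync-movies and sync-collections are placeholder commands, and the health-check URL is a made-up example):

name: Sync movies and collections

tasks:
  - name: Sync movies
    run: sync-movies
  - name: Sync collections
    run: sync-collections

hooks:
  after_job:
    - name: Ping a health check when the job finishes
      run: curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/<your-check-id>

logging:
  - type: console
  - type: file
    output: ./sync.log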

Additionally, nauman is written in Rust and can be installed bloat-free onto any system as a single binary (see Installation for more details).

When not to use nauman?

You should not use nauman for tasks where you need:

  • A makefile:
    • nauman is not meant to be a replacement for makefiles.
    • It is meant to run a job to automate one single chain of tasks.
    • It does not support task parallelism, recursion or other complex workflows.
  • A data automation tool:
    • nauman is not meant to be a replacement for data automation tools.
    • It can be used to chain multiple data processing tasks together.
    • But it does not provide anything for data loading, data processing or visualization.
  • A CI tool:
    • nauman is not meant to be a replacement for CI tools.
    • It does not include any CI-specific features such as caching, build uploads or integrations with build tools.

(back to top)

Installation

The binary name for nauman is nauman.

Archives of precompiled binaries for nauman are available for Windows, macOS and Linux. Linux and Windows binaries are static executables. Users of platforms not explicitly mentioned below are advised to download one of these archives.

If you're a Rust programmer, nauman can be installed with cargo:

$ cargo install nauman

  • Note that nauman is tested with Rust 1.57.0, although it may work with older versions.
  • Note that the binary may be bigger than expected because it contains debug symbols. This is intentional. To remove debug symbols and therefore reduce the file size, run strip on the binary.

Building from source

nauman is written in Rust, so you'll need to grab a Rust installation in order to compile it. nauman compiles with Rust 1.57.0 (stable) or newer. In general, nauman tracks the latest stable release of the Rust compiler.

To build nauman:

$ git clone https://github.com/EgorDm/nauman
$ cd nauman
$ cargo build --release
$ ./target/release/nauman --version

(back to top)

Usage

The usual way to invoke nauman is to pass it the path to a job file. If you want to specify more options or override some job settings, refer to the full usage below:

USAGE:
    nauman [OPTIONS] <job file>

ARGS:
    <job file>           Path to job yaml file

OPTIONS:
        --ansi           Include ansi colors in output (default: true)
        --dry-run        Dry run to check job configuration (default: false)
    -e                   List of env variable overrides
    -h, --help           Print help information
    -l, --level          A level of verbosity, can be used multiple times (default: info)
                         [possible values: debug, info, warn, error]
        --log-dir        Directory to store logs in (default: current directory)
        --system-env     Whether to use system environment variables (default: true)
    -V, --version        Print version information
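
For example, to override an environment variable, raise the log verbosity and store logs in a custom directory (the values and file name below are illustrative, reused from the examples above):

$ nauman -e CHECK_2=0178d446-9b50-4158-b50d-7df098945c81 -l debug --log-dir ./logs env-vars.yml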

(back to top)

Job Syntax
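
A job file is plain YAML. The sketch below summarizes the keys used throughout the examples in this README; all values are illustrative, and the individual sections above describe each option in more detail:

name: My Job                     # Job name
policy: no_prior_failed          # Execution policy: always, prior_success or no_prior_failed
shell: bash                      # Default shell for all tasks
cwd: /my/project/dir             # Default working directory

env:                             # Job-level environment variables
  MY_VAR: my value

options:
  log_dir: ./logs                # Root directory for log output
  dotenv: /path/to/my_env.env    # Optional dotenv file

tasks:                           # Tasks run in the order they are listed
  - name: My Task
    run: echo "Hello World!"
    policy: always               # Tasks may override policy, shell, cwd and env
    env:
      MY_VAR: task-specific value
    hooks:                       # Task-local hooks
      on_success:
        - run: echo "Task succeeded"
      on_failure:
        - run: echo "Task failed"

hooks:                           # Job-level hooks
  before_job:
    - run: echo "Job is starting"
  after_job:
    - run: echo "Job has finished"

logging:                         # One or more log outputs
  - type: console
  - type: file
    output: ./job.log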

Alternatives

If this is not what you are looking for, check out these cool alternatives:

  • Bash or Makefile
  • just - a handy way to save and run project-specific commands
  • fastlane - a tool for iOS and Android developers to automate tedious tasks like generating screenshots, dealing with provisioning profiles, and releasing your application
  • Apache Airflow - a platform created by the community to programmatically author, schedule and monitor workflows

(back to top)

TODO

  • Add support for .env files
  • Add more tests
  • Add a way to natively run web requests
  • Add a way to write outputs of different tasks
  • Add a templating system
  • Add a way to specify per log whether ansi is enabled or not
  • Add flock support
  • Always add console logging (only specify whether stdout and stderr should be logged)

Contributing

As this is a hobby project, contributions are very welcome!

The easiest way for you to contribute right now is to use nauman, and see where it's lacking.

If you have a use case nauman does not cover, please file an issue. This is immensely useful to me, to anyone wanting to contribute to the project, and to you as well if the feature is implemented.

If you're interested in helping fix an existing issue, or an issue you just filed, help is appreciated.

See CONTRIBUTING for technical information on contributing.

(back to top)

License

This project is licensed under the terms of the MIT license. See the LICENSE file.

(back to top)
