Auto Rust

auto-rust is an experimental project that aims to automatically generate Rust code with LLM (Large Language Models) during compilation, utilizing procedural macros.

⚠️ Warning

Please note that Auto-Rust is currently under development and is not yet suitable for production use. You are welcome to try it out and provide feedback, but be aware that the implementation is incomplete and it may not behave as intended.

Installation

Add auto-rust as a dependency in your Cargo.toml:

[dependencies]
auto-rust = "0.1.0"

Auto-Rust calls the OpenAI API while your crate compiles, so you need to create a .env file in the root of your project containing your API key:

OPENAI_API_KEY=<your-openai-api-key>

Example

use auto_rust::auto_implement;

#[auto_implement]
#[doc = "This function checks whether the input is a valid email address without using regex."]
fn is_email(input: String) -> bool {
    todo!()
}

fn main() {
    let result = is_email("bregyminsky.cc".to_string());
    println!("result: {}", result);
}
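
The macro uses the annotated function's definition (signature, doc comment, and body) as context and replaces the todo!() body with whatever implementation the model returns at compile time. The generated code varies from build to build; one plausible body for the doc comment above (purely illustrative, not guaranteed output) is:

// Illustrative only: one possible implementation the model could generate,
// avoiding regex as the doc comment requests.
fn is_email(input: String) -> bool {
    // Require exactly one '@' separating a non-empty local part and a domain.
    let mut parts = input.split('@');
    let (local, domain) = match (parts.next(), parts.next(), parts.next()) {
        (Some(local), Some(domain), None) => (local, domain),
        _ => return false,
    };
    // The domain must contain a dot that is neither leading nor trailing.
    !local.is_empty()
        && domain.contains('.')
        && !domain.starts_with('.')
        && !domain.ends_with('.')
}

With a body like this, the call above would print result: false, since "bregyminsky.cc" contains no '@'.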

Limitations

  • LLMs are non-deterministic. If the first attempt doesn't produce what you want, iterate and experiment with the doc comment.
  • The only context the model sees is the annotated function's definition (signature, doc comment, and body); it has no knowledge of your project structure or other files.
  • The macro does not add imports or Cargo.toml entries when the generated code introduces new dependencies; you have to add them yourself (see the sketch below).
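
For example, if the generated body happened to rely on an external crate, you would need to add both the dependency and the import by hand. The use of the rand crate below is hypothetical; it only illustrates where the manual additions would go:

// Hypothetical: if the generated body uses `rand`, you must add
// `rand = "0.8"` under [dependencies] in Cargo.toml and bring the
// trait into scope yourself; the macro will not do this for you.
use auto_rust::auto_implement;
use rand::Rng; // added manually

#[auto_implement]
#[doc = "Return a random integer between min and max, inclusive."]
fn random_in_range(min: i64, max: i64) -> i64 {
    todo!()
}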

Contributing

Contributions are welcome. Feel free to open an issue if you have any questions or want to suggest an improvement.

Comments
  • Implement a runtime version for `auto-implement`

    Create a runtime version of auto-implement that uses a different method to compute the function's response. The main idea is a macro that implements a "compiled query": it captures the nature of the input at compile time and generates code that invokes the OpenAI API at runtime to compute the output from the actual input. I think a good starting point is to read and explore marvin's implementation: https://github.com/PrefectHQ/marvin/blob/35582ba2142d2fb6826e97416e7f0afc84d4c721/src/marvin/ai_functions/base.py

    opened by bregydoc 0
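
To make the idea in the comment above concrete, here is a minimal sketch of the code such a runtime variant might generate for a stub like fn translate(text: String) -> String. Everything in it is an assumption: the call_openai helper and the expansion strategy are hypothetical and not part of auto-rust today.

// Hypothetical sketch: roughly what a runtime `auto_implement` variant could
// expand the annotated stub into. `call_openai` stands in for a blocking
// OpenAI client call; it is not an auto-rust API.
fn call_openai(_prompt: &str) -> String {
    // Placeholder: send the prompt to the OpenAI API using OPENAI_API_KEY
    // and return the model's reply. Omitted to keep the sketch self-contained.
    unimplemented!("send the prompt to the OpenAI API and return the reply")
}

fn translate(text: String) -> String {
    // The signature and doc comment are captured at compile time; the concrete
    // argument value is only known, and sent to the model, at runtime.
    let prompt = format!(
        "fn translate(text: String) -> String\n\
         Doc: Translate the given English text to French.\n\
         Input: {:?}\n\
         Respond with only the function's return value.",
        text
    );
    call_openai(&prompt)
}

Compared to the compile-time macro, this trades a fixed generated body for one API call per invocation, which is closer to how marvin's ai_functions behave.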