Solving context limits when working with LLMs by implementing a "chunkable" attribute on your prompt structs.

Promptize

Overview

Promptize attempts to solve context-limit issues when working with LLMs. It lets a user derive "Promptize" on a struct and mark a specific field with the "chunkable" attribute.

If the prompt exceeds the allowable token count for your model, the "chunkable" field is split into equally sized parts that each fit within the context limit.

The remaining fields from the original struct are repeated across a Vec of new instances of that struct, with the chunked field's content divided among them.

It's up to the caller to call OpenAI once for each prompt returned as a result of chunking.
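
Conceptually, the chunking step works like the sketch below. This is only an illustration of the idea using tiktoken-rs, not the crate's actual implementation (chunk_by_tokens and max_tokens are names made up for this sketch):

use tiktoken_rs::cl100k_base;

// Illustrative only: split a string into the smallest number of roughly
// equal, token-bounded pieces. Assumes max_tokens > 0.
fn chunk_by_tokens(content: &str, max_tokens: usize) -> anyhow::Result<Vec<String>> {
    let bpe = cl100k_base()?;
    let tokens = bpe.encode_with_special_tokens(content);
    if tokens.len() <= max_tokens {
        return Ok(vec![content.to_string()]);
    }
    // Smallest number of pieces that fit, then divide tokens evenly across them.
    let chunk_count = (tokens.len() + max_tokens - 1) / max_tokens;
    let chunk_len = (tokens.len() + chunk_count - 1) / chunk_count;
    tokens
        .chunks(chunk_len)
        .map(|piece| bpe.decode(piece.to_vec()))
        .collect()
}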

Example Usage

use promptize::Promptize;

#[derive(Promptize, Debug, serde::Serialize)]
pub struct SomePrompt {
    system_prompt: String,
    user_prompt: String,
    filename: String,
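    // The field to split into chunks when the prompt exceeds the context limit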
    #[chunkable]
    large_content: String
}

fn main() -> anyhow::Result<()> {
    const TOKEN_CONTEXT_LIMIT: i32 = 8192;
    const MAX_CHUNKS: i32 = 5;

    // build_prompt chunks the "chunkable" field if the prompt exceeds the limit
    let prompts = SomePrompt::builder()
        .system_prompt("System prompt here".to_string())
        .user_prompt("User prompt here".to_string())
        .filename("huge_file.rs".to_string())
        .large_content("some huge amount of content here".to_string())
        .build_prompt("gpt-4", TOKEN_CONTEXT_LIMIT, MAX_CHUNKS)?;

    Ok(())
}

Limitations

  1. Only one field can be "chunkable"
  2. A chunkable field can only be split a limited number of times (MAX_CHUNKS in the example above) to fit within the context limit
  3. The macro brings in a few dependencies: tiktoken-rs, serde, and anyhow
  4. Chunked prompts are returned using tiktoken-rs's prompt struct; it's up to you to map this back to whatever struct you're using (see the sketch after this list)
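
For point 4, the mapping back is mechanical. A sketch, assuming the returned messages expose role and content fields (check the fields your tiktoken-rs version actually exposes; MyMessage is a hypothetical target type):

use tiktoken_rs::ChatCompletionRequestMessage;

// Hypothetical application-side message type.
struct MyMessage {
    role: String,
    content: String,
}

// Assumes `role: String` and `content: Option<String>` on the tiktoken-rs
// message; adjust to the shape your version actually provides.
fn to_my_message(msg: ChatCompletionRequestMessage) -> MyMessage {
    MyMessage {
        role: msg.role,
        content: msg.content.unwrap_or_default(),
    }
}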

Other considerations

  1. It's up to the caller to handle the responses and call OpenAI once for each returned prompt, as sketched below
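
A sketch of that caller-side loop (send_to_openai is a hypothetical helper standing in for whatever OpenAI client you use; it is not part of Promptize):

// One API call per chunked prompt; the caller assembles the responses.
fn collect_responses<P>(
    prompts: Vec<P>,
    send_to_openai: impl Fn(&P) -> anyhow::Result<String>,
) -> anyhow::Result<Vec<String>> {
    prompts.iter().map(|p| send_to_openai(p)).collect()
}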