Putting a brain behind `cat` 🐈‍⬛ Integrating language models into the Unix command ecosystem through text streams.

Overview


smartcat (sc)

Puts a brain behind cat! A CLI interface to bring language models into the Unix ecosystem and allow power users to make the most out of LLMs.

Installation

With Rust and cargo installed and set up:

cargo install smartcat

(the binary is named sc)

Or directly download the binary compiled for your platform from the release page.


On the first run, smartcat will ask you to generate some default configuration files if it cannot find them. More about that in the configuration section.

A default prompt is needed for smartcat to know which API and model to hit.

Usage

Usage: sc [OPTIONS] [CONFIG_PROMPT]

Arguments:
  [CONFIG_PROMPT]  which prompt in the config to fetch [default: default]

Options:
  -r, --repeat-input
          whether to repeat the input before the output, useful to extend instead of replacing
  -p, --custom-prompt <CUSTOM_PROMPT>
          custom prompt to append before the input
  -s, --system-message <SYSTEM_MESSAGE>
          system "config"  message to send after the prompt and before the first user message
  -c, --context <CONTEXT>
          context string (will be file content if it resolves to an existing file's path) to
          include after the system message and before first user message
  -a, --after-input <AFTER_INPUT>
          suffix to add after the input and the custom prompt
      --api <API>
          overrides which api to hit [possible values: openai]
  -m, --model <MODEL>
          overrides which model (of the api) to use
  -f, --file <FILE>
          skip reading from the input and read this file instead
  -i, --input <INPUT>
          skip reading from input and use that value instead
  -h, --help
          Print help
  -V, --version
          Print version

Currently only supporting openai and chatgpt, but built to work seamlessly with multiple providers if competitors emerge.

You can use it to accomplish tasks in the CLI but also in your editors (if they are good unix citizens, i.e. work with shell commands and text streams) to complete, refactor, write tests... anything!

The key to making this work seamlessly is a good default prompt that tells the model to behave like a CLI tool and not write any unwanted text like markdown formatting or explanations.

A few examples to get started 🐈‍⬛

Ask anything without leaving the comfort of your terminal

sc -i "sed command to remove trailaing whitespaces at the end of all non-markdown files?"
> sed -i '' 's/[ \t]*$//' *.* !(*.md)
sc -i "shell script to migrate a repository from pipenv to poetry" >> poetry_mirgation.sh

Use the -i flag so that it doesn't wait for piped input.
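The other flags can be combined in the same way. A hypothetical one-off call, passing a file as context and overriding the model (the file name, question, and model choice are illustrations only):

sc -i "what does this manifest pull in?" -c Cargo.toml -m gpt-4-1106-preview

Here -c resolves Cargo.toml to its content and injects it before the question, and -m picks a specific model for this single call.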

Manipulate file and text streams

cat Cargo.toml | sc -p "write a short poem about the content of the file"

A file named package,
Holds the keys of a software's age.
With a name, version, and edition too,
The content speaks of something new.

Dependencies lie within,
With toml, clap, ureq, and serde in,
The stars denote any version will do,
As long as the features are included, too.

A short poem of the file's content,
A glimpse into the software's intent.
With these keys and dependencies,
A program is born, fulfilling needs.
sc -f Cargo.toml -p "translate the following file in json" | save Cargo.json
cat my_stuff.py | \
sc -p "write a parametrized test suite for the following code using pytest" \
-s "output only the code, as a standalone file with the imports. \n" \
-a "" \
> test.py

If you find yourself reusing prompts often, you can create dedicated config entries and it becomes the following:

sc write_tests -f my_file.py > test.py

See the example in the configuration section.

Skipping input to talk directly to the model (but mind the default prompt)

sc empty -i "Do you like trains?"

So if you wonder, do I like the trains of steel and might,
My answer lies in how they're kin to code that runs so right.
Frameworks and libraries, like stations, stand so proud
And programmers, conductors, who make the engines loud.

Integrating with editors

The key to a good integration in editors is a good default prompt (or set of prompts) combined with the -p flag for specifying the task at hand. The -r flag can be used to decide whether to replace or extend the selection.

Vim

Start by selecting some text, then press :. You can then pipe the selection content to smartcat.

:'<,'>!sc -p "replace the versions with wildcards"
:'<,'>!sc -p "fix the typos in this text"

will replace the current selection with the same text transformed by the language model.

:'<,'>!sc -p "implement the traits FromStr and ToString for this struct" -r
:'<,'>!sc write_test -r

will append the language model's output at the end of the current selection.

...

With some remapping you may have your most recurrent actions attached to a few keystrokes, e.g. <leader>wt!
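For instance, a hypothetical visual-mode mapping for your vimrc (assuming a write_tests prompt entry exists in your config, as in the configuration section below):

" pipe the visual selection through smartcat and append the generated tests after it
vnoremap <leader>wt :!sc write_tests -r<CR>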

Helix and Kakoune

Same concept, different shortcut: simply press the pipe key to redirect the selection to smartcat.

pipe:sc write_tests -r

These are only some ideas to get started, go nuts!

Configuration

  • by default lives at $HOME/.config/smartcat
  • the directory can be set using the SMARTCAT_CONFIG_PATH environment variable (see the example after this list)
  • use #[<input>] as the placeholder for input when writing prompts
  • the default model is gpt-4 but I recommend trying the latest ones and seeing which one works best for you. I currently use gpt-4-1106-preview.
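For example, to keep the configuration in a custom directory (the path below is purely illustrative):

export SMARTCAT_CONFIG_PATH="$HOME/dotfiles/smartcat"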

Two files are used:

.api_configs.toml

[openai]  # each API has its own config section with an api_key and url
url = "https://api.openai.com/v1/chat/completions"
api_key = "<your_api_key>"

prompts.toml

[default]  # a prompt is a section
api = "openai"  # must refer to an entry in the `.api_configs.toml` file
model = "gpt-4-1106-preview"

[[default.messages]]  # then you can list messages
role = "system"
content = """\
You are an extremely skilled programmer with a keen eye for detail and an emphasis on readable code. \
You have been tasked with acting as a smart version of the cat unix program. You take text and a prompt in and write text out. \
For that reason, it is of crucial importance to just write the desired output. Do not under any circumstance write any comment or thought \
as your output will be piped into other programs. Do not write the markdown delimiters for code either. \
Sometimes you will be asked to implement or extend some input code. Same thing goes here, write only what was asked because what you write will \
be directly added to the user's editor. \
Never ever write ``` around the code. \
Now let's make something great together!
"""

[empty]  # always nice to have an empty prompt available
api = "openai"
model = "gpt-4-1106-preview"
messages = []

[write_tests]
api = "openai"
model = "gpt-4-1106-preview"

[[write_tests.messages]]
role = "system"
content = """\
You are an extremely skilled programmer with a keen eye for detail and an emphasis on readable code. \
You have been tasked with acting as a smart version of the cat unix program. You take text and a prompt in and write text out. \
For that reason, it is of crucial importance to just write the desired output. Do not under any circumstance write any comment or thought \
as your output will be piped into other programs. Do not write the markdown delimiters for code either. \
Sometimes you will be asked to implement or extend some input code. Same thing goes here, write only what was asked because what you write will \
be directly added to the user's editor. \
Never ever write ``` around the code. \
Now let's make something great together!
"""

[[write_tests.messages]]
role = "user"
# the following placeholder string #[<input>] will be replaced by the input
# each message seeks it and replaces it
content ='''Write tests using pytest for the following code. Parametrize it if appropriate.

#[<input>]
'''

See the config setup file for more details.

Developing

Some tests rely on environment variables and don't behave well with multi-threading, so make sure to test with

cargo test -- --test-threads=1

State of the project

Smartcat has reached an acceptable feature set. The focus is now on upgrading the codebase quality, as I hadn't really touched Rust since 2019 and it shows.

TODO

  • make it available on homebrew

Ideas:

  • interactive mode to have conversations and make the model iterate on the last answer
  • fetch more context from the codebase