The fastest CLI tool for prompting LLMs, including support for prompting several LLMs at once!

Overview

cai - The fastest CLI tool for prompting LLMs

Features

  • Built with Rust 🦀 for supreme performance and speed! 🏎️
  • Support for models by Groq, OpenAI, Anthropic, and local LLMs. 📚
  • Prompt several models at once (see the example below). 🤼
  • Syntax highlighting for better readability of code snippets. 🌈
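
For example, the all subcommand fans a single prompt out to every configured provider at once (the command is documented in the help output below; the prompt is just an illustration):

cai all List 3 facts about the moon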

Demo

[Demo recording of cai]

Installation

cargo install cai

Usage

Before using Cai, an API key must be set up. Simply execute cai in your terminal and follow the instructions.

Cai supports the following APIs:

  • Groq
  • OpenAI
  • Anthropic
  • Llamafile (local, http://localhost:8080)
  • Ollama (local, http://localhost:11434)

Afterwards, you can use cai to run prompts directly from the terminal:

cai List 10 fast CLI tools

Or a specific model, like Anthropic's Claude Opus:

cai cl List 10 fast CLI tools

Full help output:

$ cai help
Cai 0.6.0

The fastest CLI tool for prompting LLMs

Usage: cai [OPTIONS] [PROMPT]... [COMMAND]

Commands:
  groq       Groq [aliases: gr]
  ll         - Llama 3 shortcut (🏆 Default)
  mi         - Mixtral shortcut
  openai     OpenAI [aliases: op]
  gp         - GPT-4o shortcut
  gm         - GPT-4o mini shortcut
  anthropic  Anthropic [aliases: an]
  cl         - Claude Opus
  so         - Claude Sonnet
  ha         - Claude Haiku
  llamafile  Llamafile server hosted at http://localhost:8080 [aliases: lf]
  ollama     Ollama server hosted at http://localhost:11434 [aliases: ol]
  all        Simultaneously send prompt to each provider's default model:
                 - Groq Llama3
                 - Anthropic Claude Sonnet 3.5
                 - OpenAI GPT-4o mini
                 - Ollama Llama3
                 - Llamafile
  help       Print this message or the help of the given subcommand(s)

Arguments:
  [PROMPT]...  The prompt to send to the AI model

Options:
  -r, --raw   Print raw response without any metadata
  -j, --json  Prompt LLM in JSON output mode
  -h, --help  Print help


Examples:
  # Send a prompt to the default model
  cai Which year did the Titanic sink

  # Send a prompt to each provider's default model
  cai all Which year did the Titanic sink

  # Send a prompt to Anthropic's Claude Opus
  cai anthropic claude-opus Which year did the Titanic sink
  cai an claude-opus Which year did the Titanic sink
  cai cl Which year did the Titanic sink
  cai anthropic claude-3-opus-20240229 Which year did the Titanic sink

  # Send a prompt to locally running Ollama server
  cai ollama llama3 Which year did the Titanic sink
  cai ol ll Which year did the Titanic sink

  # Add data via stdin
  cat main.rs | cai Explain this code
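
Beyond the bundled examples, the --raw and --json flags are handy for scripting. Two sketches, assuming the default model is configured and jq is installed for the JSON case (flags are placed before the prompt, following the cai [OPTIONS] [PROMPT]... form shown above):

  # Print only the model's answer, without any metadata
  cai -r Which year did the Titanic sink

  # Request JSON output mode and pretty-print the result with jq
  cai -j List the three largest planets as a JSON array | jq .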

Related

  • AI CLI - Get answers for CLI commands from ChatGPT. (TypeScript)
  • AIChat - All-in-one chat and copilot CLI for 10+ AI platforms. (Rust)
  • ja - CLI / TUI app to work with AI tools. (Rust)
  • llm - Access large language models from the command-line. (Python)
  • smartcat - Integrate LLMs in the Unix command ecosystem. (Rust)
  • tgpt - AI chatbots for the terminal without needing API keys. (Go)
Comments
  • Local provider URL not configurable / doesn't use standard Ollama URL/port

    It seems odd that cai tries to use Ollama on a non-standard port (8080).

    I think it should use, or at least default to, the standard Ollama URL of http://127.0.0.1:11434 - this would allow people to install cai and just start using it without having to customise Ollama or other compatible tooling that uses Ollama's default port 11434.

    cai local test
    ERROR:
    error sending request for url (http://localhost:8080/v1/chat/completions)
    
    ollama serve
    listen tcp 127.0.0.1:11434
    
    opened by sammcj 3
  • feat: add local llm url env, default to standard Ollama URL

    fixes #4

    • Add env for local LLM URL (defaults to Ollama / http://127.0.0.1:11434 - understand if you want to change this to whatever other URL your server uses)
    opened by sammcj 1
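
For reference, a quick way to confirm which port a local Ollama server is actually listening on is to query its model-listing endpoint (a sketch; it assumes curl is available and Ollama is running with its default settings):

    # Ollama's default API port is 11434; this lists the locally available models
    curl http://127.0.0.1:11434/api/tags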