Ask the Terminal Anything (ATA): ChatGPT in the terminal

Overview


TIP:
Run a terminal with this tool in your background and show/hide it with a keypress.
This can be done via iTerm2 (Mac), Guake (Ubuntu), a scratchpad (i3/sway), or the quake mode of the Windows Terminal.

Productivity benefits

  • The terminal starts more quickly and requires fewer resources than a browser.
  • The keyboard shortcuts allow for quick interaction with the query. For example, press CTRL + c to cancel the stream, CTRL + ↑ to get the previous query again, and CTRL + w to remove the last word.
  • A terminal can be set to run in the background and show/hide with one keypress. To do this, use iTerm2 (Mac), Guake (Ubuntu), scratchpad (i3/sway), or the quake mode for the Windows Terminal.
  • The prompts are reproducible because each prompt is sent as a stand-alone prompt without history. Tweaking the prompt can be done by pressing CTRL + ↑ and making changes.

Usage

Download the binary for your system from Releases. If you're running Arch Linux, then you can use the AUR packages: ata, ata-git, or ata-bin.

To specify the API key and some basic model settings, start the application. It should give an error and offer to create a configuration file called ata.toml for you. Press y and then ENTER to create an ata.toml file.

Next, request an API key via https://beta.openai.com/account/api-keys and update the key in the example configuration file.
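
For illustration, a minimal ata.toml might look like the sketch below. The field names shown here (api_key, model, max_tokens, temperature) are assumptions based on the settings ata prints at startup; the file that ata generates for you is authoritative.

```toml
api_key = "sk-..."        # the OpenAI API key obtained above
model = "gpt-3.5-turbo"   # the model to query
max_tokens = 500          # maximum tokens in a response
temperature = 0.8         # higher values give more varied answers
```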

For more information, see:

$ ata --help

FAQ

How much will I have to pay for the API?

Using OpenAI's API for chat is very cheap. Say an average response is about 500 tokens, which costs about $0.001. If you make 100 requests per day, which is a lot, that will cost you about $0.10 per day ($3 per month). OpenAI grants you $18.00 for free, so you can use the API for about 180 days (6 months) before having to pay.
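
The arithmetic can be checked with a small sketch. The $0.002 per 1000 tokens price used here is an assumption based on OpenAI's published gpt-3.5-turbo pricing at the time of writing:

```rust
// Back-of-the-envelope cost estimate for API usage.
fn cost_usd(tokens: f64, price_per_1k: f64) -> f64 {
    tokens / 1000.0 * price_per_1k
}

fn main() {
    let per_response = cost_usd(500.0, 0.002); // about $0.001 per response
    let per_day = per_response * 100.0;        // about $0.10 for 100 requests per day
    let per_month = per_day * 30.0;            // about $3 per month
    let free_days = 18.0 / per_day;            // about 180 days on the $18 free grant
    println!("response: ${per_response}, day: ${per_day}, month: ${per_month}, free days: {free_days}");
}
```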

Can I build the binary myself?

Yes, you can clone the repository and build the project via Cargo. Make sure that you have Cargo installed and then run:

$ git clone https://github.com/rikhuijzer/ata.git
$ cd ata/
$ cargo build --release

After this, your binary should be available at target/release/ata (Unix-based) or target/release/ata.exe (Windows).

Comments
  • Responses appear empty

    $ ata --config=.config/ata.toml
    Ask the Terminal Anything
    
    model: text-davinci-003
    max_tokens: 500
    temperature: 0
    
    Prompt: 
    Hi, who are you
    
    Response: 
    
    
    Prompt: 
    Hi
    
    Response: 
    
    
    Prompt: 
    test
    
    Response: 
    
    
    Prompt: 
    
    
    opened by Strykar 5
  • Version 2.0

    This is going to be a bit of a strange pull request and I do apologize for that.

    There are many very good reasons you may not want this massive patch.

    Starting with the fact that I developed it entirely on my Android phone, as the final project of a personal challenge to see how complicated a patch I could actually write this way.

    I consider the test over now because there's really nothing left to do beyond this; everything else would just take longer, but I feel I've proven you can do anything this way. I also wrote several websites and patched ffmpeg.

    Nevertheless, a lot of time did go into it. And I think my version is better, because it has many features that yours does not:

    • You can provide any API key/value pair as of 2023-03-02
    • You can have multiple named configurations
    • It does not do the very unsafe thing of recommending that you write configuration to the current directory
    • You can provide no configuration at all and it will figure out where it could go and even offer to write you a skeleton.

    Either way have fun 😊

    opened by ctrlcctrlv 4
  • Current release for aarch64 on macOS has the wrong cpu type

    Hi,

    I downloaded ata-aarch64-apple-darwin and tried to run it in the Terminal and got -bash: /Users/plumps/Downloads/ata-aarch64-apple-darwin: Bad CPU type in executable as response.

    Checking the file type with file showed:

    file ~/Downloads/ata-aarch64-apple-darwin
    /Users/plumps/Downloads/ata-aarch64-apple-darwin: Mach-O 64-bit executable x86_64
    
    opened by mi-skam 3
  • `invalid type` error occurred when attempting to set the temperature

    message: "invalid type: floating point 0.5, expected i64"

    It appears that setting a floating point temperature, such as 0.5 or 0.8, is not possible.

    opened by XOKP 1
  • Set the `max_tokens` default to 4096

    Thanks to @marioseixas. I'm a bit hesitant to set it to Inf because that could mean that the tool could, in theory, produce an infinitely long response, which would cause an infinitely high bill. Very unlikely, but who knows. Having 4096 tokens printed to the terminal is already pretty long.

    • Closes #25.
    opened by rikhuijzer 0
  • max_tokens parameter

    Hello,

    Congrats on this awesome tool!

    Please remove the max_tokens parameter so that it defaults to (4096 - prompt tokens).

    You should not set max_tokens; according to the official OpenAI API, the default value is Inf, and the model can process up to 4096 tokens of information.

    opened by marioseixas 0
  • Support `gpt-3.5-turbo`

    Support the newly released chat API (released 1 March 2023): gpt-3.5-turbo. This model is better than the text models, according to Greg Brockman, and gpt-3.5-turbo is 10 times cheaper than text-davinci-003 per token. Due to the increased complexity of supporting both text models and chat models, and my limited time, this PR drops support for text models such as text-davinci-003. To support those again, tests should be added to the repository which mock both the OpenAI API and stdout (see https://github.com/kkawakam/rustyline/issues/652#issuecomment-1421314307 for a stdout mock example).

    This PR is partially based on #21.

    • Closes #19
    opened by rikhuijzer 0
  • Allow running of chat models such as `gpt-3.5-turbo-0301`

    Currently:

    Ask the Terminal Anything
    
    model: gpt-3.5-turbo-0301
    max_tokens: 500
    temperature: 0.8
    
    Prompt:
    hi there
    
    Error:
    {
      "error": {
        "message": "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?",
        "type": "invalid_request_error",
        "param": "model",
        "code": null
      }
    }
    
    opened by rikhuijzer 0
  • Automatically retry `server_error`

    This currently fails immediately:

    Error:
    {
      "error": {
        "message": "The server had an error while processing your request. Sorry about that!",
        "type": "server_error",
        "param": null,
        "code": null
      }
    }
    

    Given that this happens quite frequently in my experience, and that it is solved by doing another request, ata should probably try again after a second.
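
The proposed behavior could be sketched like this (the request closure below is a stand-in for ata's actual API call, not its real code):

```rust
use std::{thread, time::Duration};

// Retry a fallible request up to max_attempts times, sleeping one second
// between attempts, as the issue proposes for server_error responses.
fn with_retries<T, E>(max_attempts: u32, mut request: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match request() {
            Ok(v) => return Ok(v),
            Err(e) => {
                attempt += 1;
                if attempt >= max_attempts {
                    return Err(e);
                }
                thread::sleep(Duration::from_secs(1));
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    // Simulated request: fails twice with a server_error, then succeeds.
    let result = with_retries(5, || {
        calls += 1;
        if calls < 3 { Err("server_error") } else { Ok("response") }
    });
    assert_eq!(result, Ok("response"));
    assert_eq!(calls, 3);
}
```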

    opened by rikhuijzer 0
  • about issue #25 max_tokens parameter, you did not get the idea

    Hi

    I think you made a mistake in your last commit.

    Setting max_tokens to 4096 means every request fails with an error.

    I asked you to remove every mention of max_tokens or token parameters from the code, so that it defaults to Inf (4096 - prompt tokens). Inf is not infinite; Inf equals 4096 - prompt tokens, so that no one needs to keep changing ata.toml to reduce or increase the max_tokens limit. The calculation happens automatically.
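
The calculation being described can be written out as a sketch (this is an illustration of the rule, not ata's code):

```rust
// The completion budget is the model's context window (4096 tokens for
// text-davinci-003) minus the tokens already used by the prompt.
fn remaining_tokens(context_window: u32, prompt_tokens: u32) -> u32 {
    context_window.saturating_sub(prompt_tokens)
}

fn main() {
    // With a 100-token prompt, up to 3996 completion tokens remain.
    assert_eq!(remaining_tokens(4096, 100), 3996);
    // Hard-coding max_tokens = 4096 leaves no room for any prompt,
    // which is why the API rejects the request.
    assert_eq!(remaining_tokens(4096, 4096), 0);
}
```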


    The person who explained that to me was the awesome dev Jack Wong, in an issue on his incredible app: https://github.com/jw-12138/davinci-web/issues/8

    opened by marioseixas 0
  • Support chat (with a chat history, that is, a conversation)

    This should be very much doable. To implement it, add an extra handler for CTRL + L and make that clear the message history. The handler was already added in https://github.com/rikhuijzer/ata/pull/23.
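
A minimal sketch of what such a clearable message history could look like (the Message and Chat types here are illustrative, not ata's actual code):

```rust
// A single chat message with the role ("user" or "assistant") and its text.
struct Message { role: String, content: String }

// The running conversation that gets sent along with each new prompt.
struct Chat { history: Vec<Message> }

impl Chat {
    fn new() -> Self { Chat { history: Vec::new() } }
    fn push(&mut self, role: &str, content: &str) {
        self.history.push(Message { role: role.into(), content: content.into() });
    }
    // What a CTRL + L handler could call to reset the conversation.
    fn clear(&mut self) { self.history.clear(); }
}

fn main() {
    let mut chat = Chat::new();
    chat.push("user", "hi there");
    chat.push("assistant", "Hello!");
    assert_eq!(chat.history.len(), 2);
    chat.clear();
    assert!(chat.history.is_empty());
}
```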

    opened by rikhuijzer 1
  • Try reading `ata.toml` from some default paths

    This would make it easier to add the binary to the bin folder, place the config in some default path, and then run

    $ ata
    

    without extra flags.

    For now, this can be achieved by adding the following script to the bin folder (for example, ~/.local/bin):

    #!/usr/bin/env bash
    
    /path/to/ata/binary --config=/path/to/config/toml "$@"
    

    Thanks to "$@", this will accept arbitrary arguments to ata.
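
Inside ata itself, the default-path lookup could be sketched roughly as follows (the candidate paths below are assumptions for illustration, not the paths ata actually checks):

```rust
use std::path::PathBuf;

// Candidate locations to search for ata.toml, in order of preference.
fn default_config_paths() -> Vec<PathBuf> {
    let mut paths = Vec::new();
    if let Some(home) = std::env::var_os("HOME") {
        let home = PathBuf::from(home);
        paths.push(home.join(".config/ata/ata.toml"));
        paths.push(home.join(".ata.toml"));
    }
    // Fall back to the current directory.
    paths.push(PathBuf::from("ata.toml"));
    paths
}

// Return the first candidate that exists on disk, if any.
fn find_config() -> Option<PathBuf> {
    default_config_paths().into_iter().find(|p| p.exists())
}

fn main() {
    match find_config() {
        Some(p) => println!("using config at {}", p.display()),
        None => println!("no ata.toml found in default locations"),
    }
}
```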

    opened by rikhuijzer 1
  • Cancel should have cooldown

    When canceling a prompt with CTRL + C, it is very tricky to ensure that the stream doesn't end at the same time. If that happens, ata will shut down.

    To solve this, there should be a timeout of a few seconds after the stream has finished in which CTRL + C is ignored.

    opened by rikhuijzer 0
Releases(v2.0.0)
Owner
Rik Huijzer
PhD student at University of Groningen. You can contact me at [email protected] or https://julialang.zulipchat.com