Ask the Terminal Anything (ATA): ChatGPT in the terminal

Overview


TIP:
Run a terminal with this tool in the background and show/hide it with a keypress.
This can be done via iTerm2 (Mac), Guake (Ubuntu), a scratchpad (i3/sway), or the quake mode of the Windows Terminal.

Productivity benefits

  • The terminal starts more quickly and requires fewer resources than a browser.
  • The keyboard shortcuts allow for quick interaction with the query. For example, press CTRL + c to cancel the stream, CTRL + ↑ to get the previous query again, and CTRL + w to remove the last word.
  • The prompts are reproducible because each prompt is sent stand-alone, without history. To tweak a prompt, press CTRL + ↑ and edit it.

Usage

Download the binary for your system from Releases. If you're running Arch Linux, then you can use the AUR packages: ata, ata-git, or ata-bin.

To specify the API key and some basic model settings, start the application. On first run, it reports a missing configuration and offers to create a configuration file called ata.toml for you. Press y and ENTER to create the ata.toml file.

Next, request an API key via https://beta.openai.com/account/api-keys and set it in the generated configuration file.
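
The generated configuration can then be edited by hand. As a sketch, assuming field names that match the settings the tool prints at startup (model, max_tokens, temperature; the api_key field name is an assumption), it might look like:

```toml
# ata.toml -- example configuration (a sketch; field names other than those
# printed by the tool at startup are assumptions).
api_key = "<your OpenAI API key>"
model = "text-davinci-003"
max_tokens = 500
temperature = 0.8
```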

For more information, see:

$ ata --help

FAQ

How much will I have to pay for the API?

Using OpenAI's API for chat is very cheap. Say an average response is about 500 tokens and costs about $0.001. If you do 100 requests per day, which is a lot, that will cost you about $0.10 per day ($3 per month). OpenAI grants you $18.00 of free credit, so you can use the API for about 180 days (6 months) before having to pay.
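
The figures above can be checked with a bit of arithmetic; here is that calculation, using only the numbers from the text:

```python
# Back-of-the-envelope cost estimate using the figures from the text above.
cost_per_response = 0.001  # dollars per ~500-token response
requests_per_day = 100     # a generous daily usage estimate

cost_per_day = requests_per_day * cost_per_response
cost_per_month = 30 * cost_per_day
free_credit = 18.00        # dollars of free credit granted by OpenAI
free_days = free_credit / cost_per_day

print(f"${cost_per_day:.2f}/day, ${cost_per_month:.2f}/month, {free_days:.0f} free days")
# → $0.10/day, $3.00/month, 180 free days
```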

Can I build the binary myself?

Yes, you can clone the repository and build the project via Cargo. Make sure that you have Cargo installed and then run:

$ git clone https://github.com/rikhuijzer/ata.git
$ cd ata/
$ cargo build --release

After this, your binary should be available at target/release/ata (Unix-based) or target/release/ata.exe (Windows).

Comments
  • Responses appear empty

    $ ata --config=.config/ata.toml
    Ask the Terminal Anything
    
    model: text-davinci-003
    max_tokens: 500
    temperature: 0
    
    Prompt: 
    Hi, who are you
    
    Response: 
    
    
    Prompt: 
    Hi
    
    Response: 
    
    
    Prompt: 
    test
    
    Response: 
    
    
    Prompt: 
    
    
    opened by Strykar 5
  • Version 2.0

    This is going to be a bit of a strange pull request and I do apologize for that.

    There are many very good reasons you may not want this massive patch.

    Starting with the fact that I developed it entirely on my Android phone as the final project of a personal challenge to see how complicated of a patch I could actually write this way:

    [screenshot: developing the patch on an Android phone]

    I consider the test over now because there's really nothing left to prove; everything just takes longer, but I feel I've shown you can do anything this way. I also wrote several websites and patched ffmpeg.

    Nevertheless, a lot of time did go into it. And I think my version is better because it has many features that yours does not:

    • You can provide any API key/value pair as of 2023-03-02
    • You can have multiple named configurations
    • It does not do the very unsafe thing of recommending that you write configuration to the current directory
    • You can provide no configuration at all and it will figure out where it could go and even offer to write you a skeleton.

    Either way have fun 😊

    opened by ctrlcctrlv 4
  • Current release for aarch64 on macOS has the wrong cpu type

    Hi,

    I downloaded ata-aarch64-apple-darwin and tried to run it in the Terminal and got -bash: /Users/plumps/Downloads/ata-aarch64-apple-darwin: Bad CPU type in executable as response.

    Checking the file type with file showed:

    file ~/Downloads/ata-aarch64-apple-darwin
    /Users/plumps/Downloads/ata-aarch64-apple-darwin: Mach-O 64-bit executable x86_64
    
    opened by mi-skam 3
  • `invalid type` error occurred when attempting to set the temperature

    message: "invalid type: floating point 0.5, expected i64"

    It appears that setting a floating point temperature, such as 0.5 or 0.8, is not possible.

    opened by XOKP 1
  • Set the `max_tokens` default to 4096

    Thanks to @marioseixas. I'm a bit hesitant to set it to Inf because that could mean the tool produces an infinitely long response, which would cause an infinitely high bill. Very unlikely, but who knows. Having 4096 tokens printed to the terminal is already pretty long.

    • Closes #25.
    opened by rikhuijzer 0
  • max_tokens parameter

    Hello

    congratz on this awesome tool

    Please remove the max_tokens parameter so that it defaults to (4096 - prompt tokens).

    You should not set max_tokens; according to the official OpenAI API, the default value is Inf, and the model can process up to 4096 tokens of information.

    opened by marioseixas 0
  • Support `gpt-3.5-turbo`

    Support the newly released chat API (released 1st of March 2023): gpt-3.5-turbo. This model is better than the text models according to Greg Brockman. Also, gpt-3.5-turbo is 10 times cheaper than text-davinci-003 per token. Due to the increased complexity in supporting text models and chat models and me having only so much time in the day, this PR drops support for text models such as text-davinci-003. To support those again, tests should be added to the repository which mock both the OpenAI API and stdout (see https://github.com/kkawakam/rustyline/issues/652#issuecomment-1421314307 for a stdout mock example).

    This PR is partially based on #21.

    • Closes #19
    opened by rikhuijzer 0
  • Allow running of chat models such as `gpt-3.5-turbo-0301`

    Currently:

    Ask the Terminal Anything
    
    model: gpt-3.5-turbo-0301
    max_tokens: 500
    temperature: 0.8
    
    Prompt:
    hi there
    
    Error:
    {
      "error": {
        "message": "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?",
        "type": "invalid_request_error",
        "param": "model",
        "code": null
      }
    }
    
    opened by rikhuijzer 0
  • Automatically retry `server_error`

    This currently fails immediately:

    Error:
    {
      "error": {
        "message": "The server had an error while processing your request. Sorry about that!",
        "type": "server_error",
        "param": null,
        "code": null
      }
    }
    

    Given that this happens quite frequently in my experience and that it is solved by simply making another request, ata should probably retry after a second.
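
A retry along those lines could look like the following sketch (hypothetical names; send_request stands in for whatever function performs the actual API call):

```python
import time

def request_with_retry(send_request, retries=3, delay=1.0):
    """Call send_request(); on a server_error response, wait and try again."""
    response = None
    for attempt in range(retries):
        response = send_request()
        error = response.get("error")
        if error is None or error.get("type") != "server_error":
            return response      # success, or an error that retrying won't fix
        if attempt < retries - 1:
            time.sleep(delay)    # back off briefly before the next attempt
    return response              # give up after the last attempt
```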

    opened by rikhuijzer 0
  • about issue #25 max_tokens parameter, you did not get the idea

    Hi

    I think you made a mistake in your last commit.

    Setting max_tokens to 4096 will always output an error like this: [screenshot of the error]

    I asked you to remove every mention of max_tokens or token parameters from the code so that it defaults to Inf (4096 - prompt tokens). Inf is not infinite; Inf equals 4096 - prompt tokens, so no one needs to keep changing ata.toml to reduce or increase the max_tokens limit. It will do the calculation automatically.

    The one who explained this to me was the awesome dev Jack Wong, in an issue on his incredible app: https://github.com/jw-12138/davinci-web/issues/8

    opened by marioseixas 0
  • Support chat (with a chat history, that is, a conversation)

    This should be very much doable. To implement it, add an extra handler for CTRL + L that clears the message history. The handler was already added in https://github.com/rikhuijzer/ata/pull/23.
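
Conceptually, a conversation is just a growing list of messages sent with each request to the chat endpoint. A minimal sketch of that state (not ata's actual implementation):

```python
# Sketch of conversation state for a chat endpoint (not ata's actual code).
history = []

def build_payload(prompt, model="gpt-3.5-turbo"):
    """Append the user prompt and return a chat-completions request body."""
    history.append({"role": "user", "content": prompt})
    return {"model": model, "messages": list(history)}

def record_reply(reply):
    """Store the assistant reply so the next request carries the context."""
    history.append({"role": "assistant", "content": reply})

def clear_history():
    """What a CTRL + L handler would do: start a fresh conversation."""
    history.clear()
```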

    opened by rikhuijzer 1
  • Try reading `ata.toml` from some default paths

    This would make it easier to add the binary to the bin folder, place the config in some default path, and then run

    $ ata
    

    without extra flags.

    For now, this can be achieved by adding the following script to the bin folder (for example, ~/.local/bin):

    #!/usr/bin/env bash
    
    /path/to/ata/binary --config=/path/to/config/toml "$@"
    

    Thanks to "$@", this will pass arbitrary arguments through to ata.

    opened by rikhuijzer 1
  • Cancel should have cooldown

    When canceling a prompt with CTRL + C, it is very tricky to ensure that the stream doesn't end at the same time. If that happens, ata will shut down.

    To solve this, there should be a timeout of a few seconds after the stream has finished in which CTRL + C is ignored.
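
One way to sketch such a guard (hypothetical names; the real fix would live in ata's CTRL + C handling):

```python
import time

class CancelGuard:
    """Ignore cancel requests for a short window after a stream finishes."""

    def __init__(self, cooldown=2.0):
        self.cooldown = cooldown   # seconds during which CTRL + C is ignored
        self.stream_ended_at = None

    def stream_finished(self):
        """Record the moment the response stream ended."""
        self.stream_ended_at = time.monotonic()

    def should_handle_cancel(self):
        """True unless a stream ended less than `cooldown` seconds ago."""
        if self.stream_ended_at is None:
            return True
        return time.monotonic() - self.stream_ended_at > self.cooldown
```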

    opened by rikhuijzer 0
Releases (v2.0.0)
Owner
Rik Huijzer
PhD student at University of Groningen. You can contact me at [email protected] or https://julialang.zulipchat.com