Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer.

Overview



🔗 Download   |   Give it a Star |   Share it on Twitter 🐦

Features

  • 🚀 The power of AI on your computer
  • 💻 Local - works without an internet connection
  • 🔒 Privacy first - your messages never leave your computer
  • 🤫 Uncensored - you can talk about whatever you want
  • 📖 Open source

Try the app

If you just want to get the app installers and try the app, go to secondbrain.sh.

How to use

The first time you open the app, you will need to download a model and then activate it.

Download a model

Secondbrain comes with a list of models ready to download that are known to work. You can check, or modify, models.json to see their details.
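For illustration, here is a minimal sketch of reading such a model list. The field names ("name", "url", "filename") and the overall shape are assumptions — the real schema is whatever the repository's models.json actually contains:

```python
import json

# Hypothetical models.json entry — field names are assumptions,
# check the real models.json in the repository for the actual schema.
example = """
[
  {"name": "WizardLM 7B",
   "url": "https://example.com/wizardlm-7b.ggml.bin",
   "filename": "wizardlm-7b.ggml.bin"}
]
"""

models = json.loads(example)
for m in models:
    # List each downloadable model and the file it will be saved as.
    print(f'{m["name"]} -> {m["filename"]}')
```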

You can also add your own model files to the /models folder and then activate them from within the Secondbrain app. The models need to be in GGML format.
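A quick way to sanity-check that a file you dropped into /models is actually a GGML-family file is to inspect its first four bytes. The magic values below are the legacy llama.cpp ones (ggml/ggmf/ggjt), quoted from memory — verify them against the llama.cpp source before relying on them:

```python
import struct

# Magic numbers of the legacy GGML file family, as used by llama.cpp.
# Listed from memory — treat as an assumption and verify upstream.
GGML_MAGICS = {0x67676D6C: "ggml", 0x67676D66: "ggmf", 0x67676A74: "ggjt"}

def detect_ggml(path):
    """Return the container name if the file starts with a GGML magic,
    otherwise None."""
    with open(path, "rb") as f:
        head = f.read(4)
    if len(head) < 4:
        return None
    (magic,) = struct.unpack("<I", head)  # magics are stored little-endian
    return GGML_MAGICS.get(magic)
```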

Activate the model

Just select the model and press "Activate model", and you are ready to start chatting.

The prompt is important

Language models are predictive machines: you feed them words (tokens, actually) and they try to predict the most likely token to come next, then the one after that, and so on. Not all models work as smoothly as ChatGPT; it depends on the pre-training, the fine-tuning, and the under-the-hood prompting.
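The "predict the next token, then the one after that" loop can be sketched with a toy model. This is purely illustrative — the dictionary "model" and greedy word-level decoding below stand in for a real LLM's learned distribution over tokens:

```python
# Toy stand-in for an LLM: given the last word, return a probability
# distribution over possible next words.
toy_model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt_words, steps):
    """Greedily append the most likely next word, then repeat."""
    words = list(prompt_words)
    for _ in range(steps):
        dist = toy_model.get(words[-1])
        if dist is None:  # the "model" has no continuation for this word
            break
        words.append(max(dist, key=dist.get))
    return words

print(generate(["the"], 3))  # ['the', 'cat', 'sat', 'down']
```

Real models sample from the distribution (temperature, top-k, etc.) rather than always taking the argmax, which is why outputs vary between runs.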

When using a model you need to take into account which prompt format it understands best. For example, Alpaca models were trained with this format:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
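Wiring an instruction into that template is just string formatting. A minimal sketch (the example instruction is made up):

```python
# The Alpaca prompt template shown above, as a format string.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction):
    """Fill the user's instruction into the Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Summarize what GGML is in one sentence."))
```

The model then continues generating right after "### Response:", which is why the template ends there.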

So, if you download and use your own models, take the prompt format into account and change it in the configuration screen.

With foundation models, like Llama, things get crazy: there is no fine-tuning, so you can finally flex your prompt engineering skills and play around.
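One common trick with base models is a few-shot prompt: instead of an instruction format, you show the pattern you want and let next-token prediction continue it. A minimal sketch, with made-up example data:

```python
# Hand-written examples that establish the pattern we want continued.
examples = [
    ("cat", "a small domesticated feline"),
    ("dog", "a loyal four-legged companion"),
]

def few_shot_prompt(word):
    """Build a prompt that ends right where the model should continue."""
    shots = "\n".join(f"Word: {w}\nDefinition: {d}" for w, d in examples)
    return f"{shots}\nWord: {word}\nDefinition:"

print(few_shot_prompt("llama"))
```

Because the prompt ends at "Definition:", a base model's most likely continuation is a definition in the same style as the examples.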

How to run from source

The app is built with Tauri, so you basically need to follow this guide: https://tauri.app/v1/guides/getting-started/prerequisites/
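Once the prerequisites are installed, a typical Tauri workflow looks like the following. The repository URL placeholder and the npm script names are assumptions — check the repo's package.json for the actual scripts:

```shell
# Hypothetical steps — script names are assumptions, see package.json.
git clone <repo-url> secondbrain   # <repo-url>: the project's Git URL
cd secondbrain
npm install          # install front-end dependencies
npm run tauri dev    # run the app in development mode with hot reload
npm run tauri build  # or: build release installers for your platform
```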

Tech stack

Contribution

Yes, please. Just send your PRs.

Thanks 🤓

Contact

Hi! You can find me here: @julioandresdev
