37 Repositories
Rust llama-cpp-rs Libraries
ClangQL is a tool that allows you to run SQL-like queries on C/C++ code instead of database files, using the GitQL SDK
ClangQL - Clang AST Query Language ClangQL is a tool that allows you to run SQL-like queries on C/C++ code instead of database files, using the GitQL SDK.
The fastest CLI tool for prompting LLMs, including support for prompting several LLMs at once!
cai - The fastest CLI tool for prompting LLMs Features Built with Rust 🦀 for supreme performance and speed! 🏎️ Support for models by Groq, OpenAI, A
A Rust LLaMA project to load, serve and extend LLM models
OpenLLaMA Overview A Rust LLaMA project to load, serve and extend LLM models. Key Objectives Support both GGML and HF (Hugging Face) models Support a st
Rust library for integrating local LLMs (with llama.cpp) and external LLM APIs.
Table of Contents About The Project Getting Started Roadmap Contributing License Contact A Rust interface for the OpenAI API and Llama.cpp ./server AP
High-level, optionally asynchronous Rust bindings to llama.cpp
llama_cpp-rs Safe, high-level Rust bindings to the C++ project of the same name, meant to be as user-friendly as possible. Run GGUF-based large langua
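As a rough illustration of what "user-friendly" means here, the load-prompt-stream flow below sketches typical usage. The type and method names (LlamaModel, LlamaParams, SessionParams, StandardSampler, start_completing_with) are recalled from the crate's documentation and may differ between versions; the model path is a placeholder.

```rust
use llama_cpp::standard_sampler::StandardSampler;
use llama_cpp::{LlamaModel, LlamaParams, SessionParams};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a GGUF model from disk (the path is a placeholder).
    let model = LlamaModel::load_from_file("model.gguf", LlamaParams::default())?;

    // Create an inference session and feed it the prompt.
    let mut session = model.create_session(SessionParams::default())?;
    session.advance_context("Write one sentence about llamas.")?;

    // Stream up to 256 generated tokens back as text.
    let completions = session.start_completing_with(StandardSampler::default(), 256)?;
    for piece in completions.into_strings() {
        print!("{piece}");
    }
    Ok(())
}
```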
A collection of serverless apps that show how Fermyon's Serverless AI works
A collection of serverless apps that show how Fermyon's Serverless AI (currently in private beta) works. Reference: https://developer.fermyon.com/spin/serverless-ai-tutorial
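These apps target Spin's built-in LLM support, where a component calls a platform-hosted model instead of bundling weights. The sketch below shows roughly what such a handler could look like; the SDK surface assumed here (spin_sdk::llm::infer, InferencingModel::Llama2Chat, the HTTP types) is recalled from the Fermyon tutorial and may not match the SDK version used by these samples.

```rust
use spin_sdk::http::{Request, Response};
use spin_sdk::http_component;
use spin_sdk::llm;

/// A Spin HTTP component that forwards the request body as a prompt to the
/// platform-hosted LLaMA 2 chat model and returns the generated text.
#[http_component]
fn handle(req: Request) -> Response {
    // Treat the raw request body as the prompt (hypothetical wiring).
    let prompt = String::from_utf8_lossy(req.body()).to_string();

    // Run inference on the host-provided model; fall back to an error string.
    let answer = match llm::infer(llm::InferencingModel::Llama2Chat, &prompt) {
        Ok(result) => result.text,
        Err(e) => format!("inference failed: {e:?}"),
    };

    Response::builder().status(200).body(answer).build()
}
```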
Rust bindings to llama.cpp, using metal on macOS
llama-rs Rust bindings to llama.cpp, for macOS, with Metal support, for testing and evaluating whether it would be worthwhile to run a LLaMA model lo
Rust library for whisper.cpp-compatible mel spectrograms
Mel Spec A Rust implementation of mel spectrograms aligned to the results from the whisper.cpp, PyTorch and librosa reference implementations and suit
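For context, a mel spectrogram is an STFT whose frequency axis has been mapped onto the perceptual mel scale by a bank of triangular filters. The sketch below uses the common HTK-style Hz-to-mel mapping purely as an illustration; whisper.cpp and librosa default to a Slaney-style filterbank, so this is not the exact convention the crate aligns to.

```rust
/// Convert a frequency in Hz to mels using the HTK convention.
fn hz_to_mel(hz: f32) -> f32 {
    2595.0 * (1.0 + hz / 700.0).log10()
}

/// Inverse mapping: mels back to Hz.
fn mel_to_hz(mel: f32) -> f32 {
    700.0 * (10f32.powf(mel / 2595.0) - 1.0)
}

fn main() {
    // Filterbank edges are spaced evenly in mel space, then converted back
    // to Hz to build the triangular filters.
    let (lo, hi) = (hz_to_mel(0.0), hz_to_mel(8000.0));
    let n_mels = 80; // Whisper-style models typically use 80 mel bins.
    let edges: Vec<f32> = (0..=n_mels + 1)
        .map(|i| mel_to_hz(lo + (hi - lo) * i as f32 / (n_mels + 1) as f32))
        .collect();
    println!("first few filter edges (Hz): {:?}", &edges[..4]);
}
```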
A minimal Rust implementation of llama2.c
llama2.rs Rust meets llama. A minimal Rust implementation of karpathy's llama2.c. Currently the code uses the 15M parameter model provided by Karpathy
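Ports like this mostly consist of a handful of small f32 kernels. As an example of the kind of code involved (not taken from the repo itself), here is RMSNorm, the normalization used throughout LLaMA-style models, written over plain slices:

```rust
/// RMSNorm as used in LLaMA-style models: scale each element by the
/// reciprocal root-mean-square of the vector, then by a learned weight.
/// This mirrors the kernel in karpathy's llama2.c; the repo's own code
/// may organize it differently.
fn rmsnorm(out: &mut [f32], x: &[f32], weight: &[f32], eps: f32) {
    let mean_sq = x.iter().map(|v| v * v).sum::<f32>() / x.len() as f32;
    let scale = 1.0 / (mean_sq + eps).sqrt();
    for ((o, &xi), &wi) in out.iter_mut().zip(x).zip(weight) {
        *o = wi * (xi * scale);
    }
}

fn main() {
    let x = [1.0_f32, 2.0, 3.0, 4.0];
    let w = [1.0_f32; 4];
    let mut out = [0.0_f32; 4];
    rmsnorm(&mut out, &x, &w, 1e-5);
    println!("{out:?}");
}
```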
Open-source Rewind.ai clone written in Rust and Vue running 100% locally with whisper.cpp
mind-overflow Open-source Rewind.AI clone built with Tauri and Vue. Leverages whisper.cpp for Speech-to-Text and (wip: llama.cpp for Text generation a
OpenAI-compatible API for serving the LLaMA-2 model
Cria - Local llama OpenAI-compatible API The objective is to serve a local llama-2 model by mimicking an OpenAI API service. The llama2 model runs on
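Because the server mimics the OpenAI API, any OpenAI-style client should be able to talk to it. Below is a hedged sketch of such a request using reqwest's blocking client; the host, port, route, and model name are assumptions to adjust for the actual server configuration, and the crate needs the blocking and json features plus serde_json.

```rust
use serde_json::json;

// Post an OpenAI-style chat completion request to a locally running,
// OpenAI-compatible server. Host, port, and route are assumptions.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = json!({
        "model": "llama-2",
        "messages": [{ "role": "user", "content": "Say hello in one sentence." }]
    });

    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post("http://localhost:3000/v1/chat/completions")
        .json(&body)
        .send()?
        .json()?;

    // The response follows the OpenAI schema: choices[0].message.content.
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}
```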
Format whisper transcripts to .srt
whispersub A dead simple utility to format the output of OpenAI's whisper model (or whisper.cpp) into an .srt file. Usage whispersub input.txt -o outp
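The .srt format the tool emits is simple: a numeric cue index, a start --> end timestamp pair with millisecond precision, the subtitle text, and a blank line. The sketch below formats one such cue from scratch; it illustrates the file format only and is not whispersub's own code.

```rust
/// Format milliseconds as an SRT timestamp: HH:MM:SS,mmm.
fn srt_timestamp(total_ms: u64) -> String {
    let (ms, s, m, h) = (
        total_ms % 1000,
        (total_ms / 1000) % 60,
        (total_ms / 60_000) % 60,
        total_ms / 3_600_000,
    );
    format!("{h:02}:{m:02}:{s:02},{ms:03}")
}

/// One SubRip cue: index, timestamp range, text, then a blank line.
fn srt_cue(index: usize, start_ms: u64, end_ms: u64, text: &str) -> String {
    format!(
        "{index}\n{} --> {}\n{text}\n\n",
        srt_timestamp(start_ms),
        srt_timestamp(end_ms)
    )
}

fn main() {
    print!("{}", srt_cue(1, 0, 2_500, "Hello from whisper."));
}
```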
An interprocess message bus system built in Rust.
An interprocess message bus system built in Rust, which can be used to pass messages between multiple processes, even including kernel objects (HANDLE
Unofficial Python bindings for the Rust llm library. 🐍❤️🦀
llm-rs-python: Python Bindings for Rust's llm Library Welcome to llm-rs, an unofficial Python interface for the Rust-based llm library, made possible
A rusty interface to llama.cpp for Rust
llama-cpp-rs Higher level API for the llama-cpp-sys library here: https://github.com/shadowmint/llama-cpp-sys/ A full end-to-end example can be found
LLaMa 7b with CUDA acceleration implemented in Rust. Minimal GPU memory needed!
LLaMa 7b in rust This repo contains the popular LLaMa 7b language model, fully implemented in the Rust programming language! Uses dfdx tensors and CUD
Run LLaMA inference on CPU, with Rust 🦀🚀🦙
LLaMA-rs Do the LLaMA thing, but now in Rust 🦀 🚀 🦙 Image by @darthdeus, using Stable Diffusion LLaMA-rs is a Rust port of the llama.cpp project. Th
Rust+OpenCL+AVX2 implementation of LLaMA inference code
RLLaMA RLLaMA is a pure Rust implementation of LLaMA large language model inference. Supported features Uses either f16 or f32 weights. LLaMA-7B, LL
Believe in AI democratization. LLaMA for Node.js, backed by llama-rs; works locally on your laptop CPU. Supports LLaMA/Alpaca models.
llama-node Large Language Model LLaMA on Node.js This project is at an early stage; the Node.js API may change in the future, so use it with caution.
A Discord bot, written in Rust, that generates responses using the LLaMA language model.
llamacord A Discord bot, written in Rust, that generates responses using the LLaMA language model. Built on top of llama-rs. Setup Model Obtain the LL
`ggllama` is a Rust port of ggerganov's llama.cpp.
Notice: llama-rs beat me to the punch. I'll be contributing to that instead. The original README is preserved below. ggllama ggllama is a Rust port of
A command line tool written in Rust and designed to be a modern build tool + package manager for C/C++ projects.
CCake CCake is a command line tool written in Rust and designed to be a modern build tool + package manager for C/C++ projects. Goals To be easily und
Using cxx to mix in Rust-code with a C++ application
Minimal application mixing C++ and Rust This example uses cxx to generate bindings between C++ and Rust, and integrates the two parts through CMake. I
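The core of a cxx-based setup is a #[cxx::bridge] module that declares which functions each language exposes to the other; the build (CMake here) compiles the generated glue. A minimal sketch of the Rust half follows; the C++ header and function shown are hypothetical stand-ins for whatever the example project actually exports.

```rust
// Minimal cxx bridge: Rust declares which C++ functions it wants to call
// and which Rust functions it exports. The C++ side (demo/include/demo.h
// and its implementation) is hypothetical and would be wired up through
// the project's CMake integration.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        include!("demo/include/demo.h");
        // Declared in the C++ header; callable from Rust.
        fn cpp_greet(name: &str) -> String;
    }

    extern "Rust" {
        // Implemented below; callable from C++.
        fn rust_answer() -> i32;
    }
}

fn rust_answer() -> i32 {
    42
}

fn main() {
    println!("{}", ffi::cpp_greet("cxx"));
}
```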
Extend anything with WebAssembly.
Welcome! Please note: this project is still under active development. It's usable, but expect some rough edges while work is underway. If you're interest
A new shellcode injection technique. Provided as a C++ header, standalone Rust program, or library.
FunctionStomping Description This is a brand-new technique for shellcode injection to evade AVs and EDRs. This technique is inspired by Module Stompin
elite - c++17 converter
elite - c++17 converter
Work-in-progress Rust application that converts C++ header-only libraries to single self-contained headers.
unosolo Work-in-progress Rust application that converts C++ header-only libraries to single self-contained headers. Disclaimer This is my first Rust p
Rust FFI bindings for StarkWare's crypto-cpp library
starkware-crypto-rs Rust FFI bindings for StarkWare's crypto-cpp library Note that the target x86_64-pc-windows-msvc is currently not supported. If you're
Azul - Desktop GUI framework
Azul - Desktop GUI framework Azul is a free, functional, reactive GUI framework for Rust and C++, built using the WebRender rendering engine and a CSS
A tool to decompile MSVC PDB files to C++ source code.
PDB Decompiler About Usage Contributing About A tool to decompile MSVC PDB files to C++ source code. This tool is a work in progress and will most lik
Use C++ libraries from Rust
ritual ritual allows you to use C++ libraries from Rust. It analyzes the C++ API of a library and generates a fully-featured crate that provides convenien
This is a maintained Rust project that exposes the DataStax cpp-driver in a somewhat-sane crate.
cassandra-rs This is a maintained rust project that exposes the cpp driver at https://github.com/datastax/cpp-driver/ in a somewhat-sane crate. For th
Turn your C/C++ code into a flowchart
cxx2flow converts C/C++ code into flowcharts. Examples: see the GALLERY for more screenshots. Two styles: polyline and smooth. Installation: compile it yourself with cargo install cxx2flow, or download prebuilt binaries; the latest builds are available from GitHub Actions or Nightly.link and include Linu
Fegeya Gretea (aka green tea), a new-generation programming language.
Fegeya Gretea Gretea (aka green tea), new generation programming language. A taste of Gretea's syntax: import tea.green.fmt module hello { fn hel
High-performance automatic differentiation of LLVM.
The Enzyme High-Performance Automatic Differentiator of LLVM Enzyme is a plugin that performs automatic differentiation (AD) of statically analyzable