Stable Diffusion XL ported to Rust's burn framework

Overview

Stable-Diffusion-XL-Burn

Stable-Diffusion-XL-Burn is a Rust-based project that ports Stable Diffusion XL to the Rust deep learning framework burn. This repository is licensed under the MIT License.

How To Use

Step 1: Download the Model and Set Environment Variables

The model files must be in burn's format. A later section explains how you can convert any SDXL model to burn's format. Start by downloading the pre-converted SDXL1.0 model files provided on HuggingFace.

wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/diffuser.bin -P ./SDXL1.0/
wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/diffuser.cfg -P ./SDXL1.0/
wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/embedder.bin -P ./SDXL1.0/
wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/embedder.cfg -P ./SDXL1.0/
wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/latent_decoder.bin -P ./SDXL1.0/
wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/latent_decoder.cfg -P ./SDXL1.0/

# if you want to use the refiner
wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/refiner.bin -P ./SDXL1.0/
wget https://huggingface.co/Gadersd/stable-diffusion-xl-burn/resolve/main/SDXL1.0/refiner.cfg -P ./SDXL1.0/
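After the downloads finish, the SDXL1.0 directory should contain a .bin/.cfg pair for each component, with the refiner pair present only if you downloaded it:

SDXL1.0/
  diffuser.bin
  diffuser.cfg
  embedder.bin
  embedder.cfg
  latent_decoder.bin
  latent_decoder.cfg
  refiner.bin    (optional)
  refiner.cfg    (optional)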

Step 2: Run the Sample Binary

Invoke the sample binary provided in the Rust code. You will need a CUDA GPU with at least 10 GB of VRAM.

export TORCH_CUDA_VERSION=cu113
# Arguments: <model> <refiner(y/n)> <unconditional_guidance_scale> <n_diffusion_steps> <prompt> <output_image>
cargo run --release --bin sample SDXL1.0 n 7.5 30 "An elegant bright red crab." crab

This command generates an image from the provided prompt and saves it as 'crab0.png'.
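If you prefer to drive the sampler from your own Rust code rather than from the shell, one simple option is to shell out to the same binary. The snippet below is a minimal sketch, assuming the project has already been built with cargo build --release (so the binary sits at ./target/release/sample) and that the SDXL1.0 folder is in the working directory; the argument order mirrors the comment above.

use std::process::Command;

fn main() {
    // Argument order: <model> <refiner(y/n)> <unconditional_guidance_scale>
    //                 <n_diffusion_steps> <prompt> <output_image>
    let status = Command::new("./target/release/sample")
        .args(["SDXL1.0", "n", "7.5", "30", "An elegant bright red crab.", "crab"])
        .status()
        .expect("failed to launch the sample binary");

    if status.success() {
        // The sampler appends an image index to the output name, e.g. crab0.png.
        println!("image written to crab0.png");
    } else {
        eprintln!("sampling failed: {status}");
    }
}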

[Image: an ancient mossy stone]

Converting Model Files to Burn's Format

The scripts in the python directory dump safetensor weights that can be converted to a format burn can load. Follow the instructions at https://github.com/Stability-AI/generative-models to install Stability AI's Python SDXL repo. Then copy the scripts from the python directory of this project into the generative-models folder and execute

python3 dump.py

A params folder will be created containing the dumped weights. Move the params folder to the root folder of this project and run

cargo run --release --bin convert params
mkdir SDXL
mv *.bin SDXL

Copy the .cfg files, which can be downloaded with wget as shown in the download section, into the SDXL folder. The models can now be sampled as demonstrated above.
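Before sampling a freshly converted model, it can help to confirm that every expected weight/config pair is in place. The following is a small sketch (not part of this repository) that checks the SDXL folder for the files listed in the download section, treating the refiner pair as optional:

use std::path::Path;

fn main() {
    let model_dir = Path::new("SDXL");

    // Each component the sampler loads comes as a .bin/.cfg pair.
    for name in ["diffuser", "embedder", "latent_decoder"] {
        for ext in ["bin", "cfg"] {
            let file = model_dir.join(format!("{name}.{ext}"));
            if !file.exists() {
                eprintln!("missing {}", file.display());
            }
        }
    }

    // The refiner pair is optional; pass 'y' to the sampler only when both files are present.
    let has_refiner = model_dir.join("refiner.bin").exists()
        && model_dir.join("refiner.cfg").exists();
    println!("refiner available: {has_refiner}");
}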

License

This project is licensed under the MIT license.

We wish you a productive time using this project. Enjoy!

You might also like...
Tangram is an automated machine learning framework designed for programmers.

Tangram Tangram is an automated machine learning framework designed for programmers. Run tangram train to train a model from a CSV file on the command

A Machine Learning Framework for High Performance written in Rust

polarlight polarlight is a machine learning framework for high performance written in Rust. Key Features TBA Quick Start TBA How To Contribute Contrib

zenoh-flow aims at providing a zenoh-based data-flow programming framework for computations that span from the cloud to the device.

Eclipse Zenoh-Flow Zenoh-Flow provides a zenoh-based dataflow programming framework for computations that span from the cloud to the device. ⚠️ This s

High performance distributed framework for training deep learning recommendation models based on PyTorch.

PERSIA (Parallel rEcommendation tRaining System with hybrId Acceleration) is developed by AI platform@Kuaishou Technology, collaborating with ETH. It

Simple WIP GPGPU framework for Rust built on top of wgpu

gpgpu A simple GPU compute library based on wgpu. It is meant to be used alongside wgpu if desired. To start using gpgpu, just create a Framework inst

Accel: GPGPU Framework for Rust

Accel: GPGPU Framework for Rust crate crates.io docs.rs GitLab Pages accel CUDA-based GPGPU framework accel-core Helper for writing device code accel-

Multi-agent (path finding) planning framework

multi-agent (path finding) planning framework Mapf is a (currently experimental) Rust library for multi-agent planning, with a focus on cooperative pa

Machine learning framework for building object trackers and similarity search engines

Similari Similari is a framework that helps build sophisticated tracking systems. The most frequently met operations that can be efficiently implement

Framework and Language for Neuro-Symbolic Programming

Scallop Scallop is a declarative language designed to support rich symbolic reasoning in AI applications. It is based on Datalog, a logic rule-based q

Comments
  • TORCH_CUDA_VERSION=cu113 fails to download

    Using export TORCH_CUDA_VERSION=cu113 I get an error downloading https://download.pytorch.org/libtorch/cu113/libtorch-cxx11-abi-shared-with-deps-2.0.0%2Bcu113.zip.

    The URL returns status code 403.

    opened by danielclough 1
  • How can we add negative prompts?

    I can not figure out how to add negative prompts. I would be excited to learn how to send a PR for that if you like.

    Can you suggest some reading to help me learn about what's going on here?

    I have worked in Rust a bit. I need to understand the torch part.

    opened by danielclough 1
  • Support .safetensors format for models

    The third-party SD ecosystem has mostly shifted to the safetensors format for distributing models, as the Python pickle format allows for arbitrary code execution on deserialization. Safetensors is safe, and easy and fast to load as well.

    opened by jdahlstrom 0
BURN: Burn Unstoppable Rusty Neurons

BURN BURN: Burn Unstoppable Rusty Neurons This library aims to be a complete deep learning framework with extreme flexibility written in Rust. The goa

Nathaniel Simard 730 Dec 26, 2022
pyke Diffusers is a modular Rust library for optimized Stable Diffusion inference 🔮

pyke Diffusers is a modular Rust library for pretrained diffusion model inference to generate images, videos, or audio, using ONNX Runtime as a backen

pyke 12 Jan 5, 2023
`dfx new --type=rust` + burn-rs MNIST web inference example

ic-mnist The frontend provides a canvas where users can draw a digit. The drawn digit is then sent to the backend canister running burn-rs for inferen

Marcin Nowak-Liebiediew 4 Jun 25, 2023
Cleora AI is a general-purpose model for efficient, scalable learning of stable and inductive entity embeddings for heterogeneous relational data.

Cleora Cleora is a genus of moths in the family Geometridae. Their scientific name derives from the Ancient Greek geo γῆ or γαῖα "the earth", and metr

Synerise 405 Dec 20, 2022
A stable, linearithmic sort in constant space written in Rust

A stable, linearithmic sort in constant space written in Rust. Uses the method described in "Fast Stable Merging And Sorting In Constant Extra Space"

Dylan MacKenzie 4 Mar 30, 2022
A Rust machine learning framework.

Linfa linfa (Italian) / sap (English): The vital circulating fluid of a plant. linfa aims to provide a comprehensive toolkit to build Machine Learning

Rust-ML 2.2k Jan 2, 2023
Open Machine Intelligence Framework for Hackers. (GPU/CPU)

Leaf • Introduction Leaf is a open Machine Learning Framework for hackers to build classical, deep or hybrid machine learning applications. It was ins

Autumn 5.5k Jan 1, 2023
Xaynet represents an agnostic Federated Machine Learning framework to build privacy-preserving AI applications.

xaynet Xaynet: Train on the Edge with Federated Learning Want a framework that supports federated learning on the edge, in desktop browsers, integrate

XayNet 196 Dec 22, 2022
Orkhon: ML Inference Framework and Server Runtime

Orkhon: ML Inference Framework and Server Runtime Latest Release License Build Status Downloads Gitter What is it? Orkhon is Rust framework for Machin

Theo M. Bulut 129 Dec 21, 2022
A fast, safe and easy to use reinforcement learning framework in Rust.

RSRL (api) Reinforcement learning should be fast, safe and easy to use. Overview rsrl provides generic constructs for reinforcement learning (RL) expe

Thomas Spooner 139 Dec 13, 2022