Stable Diffusion v1.4 ported to Rust's Burn framework

Overview

Stable-Diffusion-Burn

Stable-Diffusion-Burn is a Rust-based project that ports the Stable Diffusion v1.4 model to the Burn deep learning framework. This repository is licensed under the MIT License.

Support The Project

Stable-Diffusion-Burn is a passion project that is open and free to all. I want to empower everyone with reliable AI that they can run on their own hardware, so that great AI is not limited to the hands of a few. If you share this vision, consider supporting me so that I can continue this journey and produce more projects, such as Stable Diffusion XL in Rust.

You can show your support by buying a shirt at https://www.bonfire.com/machine-learning/. The shirt image was, of course, generated by my Rust-powered Stable Diffusion! I'd love to release more projects, and any support will help make that happen!

Any contribution would be greatly appreciated. Thanks!

How To Use

Step 1: Download the Model and Set Environment Variables

Start by downloading the SDv1-4.bin model provided on Hugging Face.

wget https://huggingface.co/Gadersd/Stable-Diffusion-Burn/resolve/main/V1/SDv1-4.bin

Next, set the CUDA version that matches your system. In the future it may be possible to run the model with the wgpu backend (cargo run --features wgpu-backend...), removing the torch dependency, but wgpu does not currently support buffer sizes large enough for Stable Diffusion.

export TORCH_CUDA_VERSION=cu113
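The tag is simply your CUDA version with the dot removed and a cu prefix (CUDA 11.3 becomes cu113). As a small sketch, a hypothetical helper for deriving the tag; check your installed version with nvidia-smi or nvcc --version:

```shell
# Hypothetical helper: derive the TORCH_CUDA_VERSION tag from a CUDA version string.
# "11.3" -> "cu113"; check your installed version with `nvidia-smi` or `nvcc --version`.
cuda_tag() {
    echo "cu$(echo "$1" | tr -d '.')"
}

export TORCH_CUDA_VERSION="$(cuda_tag 11.3)"
echo "$TORCH_CUDA_VERSION"   # prints cu113
```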

Step 2: Run the Sample Binary

Invoke the sample binary provided in the Rust code, as shown below:

# Arguments: <model_type(burn or dump)> <model> <unconditional_guidance_scale> <n_diffusion_steps> <prompt> <output_image>
cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img

This command will generate an image matching the provided prompt and save it as 'img0.png'. The final argument is a filename prefix; an index and the '.png' extension are appended.
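Because the last argument is only a prefix, a small wrapper can run a batch of prompts in one go. Below is a hedged sketch written as a dry run: echo prints each command instead of executing it (the second prompt is purely illustrative); drop the echo to actually generate images.

```shell
# Hypothetical batch driver for the sample binary; remove `echo` to actually run each command.
i=0
for prompt in "An ancient mossy stone." "A misty pine forest."; do
    echo cargo run --release --bin sample burn SDv1-4 7.5 20 "$prompt" "img$i"
    i=$((i + 1))
done
```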

An image of an ancient mossy stone

Optional: Extract and Convert a Fine-Tuned Model

If you want to use a fine-tuned version of Stable Diffusion, the Python scripts provided in this project can transform a weight dump into a Burn model file. Note: the tinygrad dependency should be installed from source rather than with pip.

# Step into the Python directory
cd python

# Download the model; this is just the base v1.4 model as an example
wget https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt

# Extract the weights
CPU=1 python3 dump.py sd-v1-4.ckpt

# Move the extracted weight folder out
mv params ..

# Step out of the Python directory
cd ..

# Convert the weights into a usable form
cargo run --release --bin convert params SDv1-4

Both binaries, 'convert' and 'sample', are written in Rust. 'convert' runs on the CPU, whereas 'sample' requires CUDA.

Remember, 'convert' is only needed if you plan to use a fine-tuned version of Stable Diffusion.
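Putting the stages together, the fine-tuned workflow is: dump the weights in Python, run convert once on the CPU, then sample as usual with CUDA. A dry-run sketch, where my-finetune is a placeholder model name and echo prints the commands instead of executing them:

```shell
# Hypothetical end-to-end dry run; "my-finetune" is a placeholder model name.
MODEL=my-finetune
echo cargo run --release --bin convert params "$MODEL"                                    # CPU-only
echo cargo run --release --bin sample burn "$MODEL" 7.5 20 "An ancient mossy stone." img  # needs CUDA
```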

License

This project is licensed under the MIT License.

We wish you a productive time using this project. Enjoy!


Comments
  • Failed to download pytorch zip

    After I run the following on Windows 10:

    export TORCH_CUDA_VERSION=cu113
    cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img
    

    I get this error:

       Compiling torch-sys v0.13.0
    error: failed to run custom build command for `torch-sys v0.13.0`
    
    Caused by:
      process didn't exit successfully: `D:\dev\stable-diffusion-burn\target\release\build\torch-sys-9321d07214d78619\build-script-build` (exit code: 1)
      --- stdout
      cargo:rerun-if-env-changed=LIBTORCH_USE_PYTORCH
      cargo:rerun-if-env-changed=LIBTORCH
      cargo:rerun-if-env-changed=TORCH_CUDA_VERSION
    
      --- stderr
      Error: https://download.pytorch.org/libtorch/cu113/libtorch-win-shared-with-deps-2.0.0%2Bcu113.zip: status code 403
    

    When I try to visit that URL, I get an AccessDenied error.

    opened by veniamin-ilmer 4
  • thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Torch("Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend.

    Error Message:

    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Torch("Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].
    CPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterCPU.cpp:31034 [kernel]\nMeta: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterMeta.cpp:26824 [kernel]\nQuantizedCPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterQuantizedCPU.cpp:929 [kernel]\nBackendSelect: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\build\\build\\aten\\src\\ATen\\RegisterBackendSelect.cpp:726 [kernel]\nPython: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:144 [backend fallback]\nFuncTorchDynamicLayerBackMode: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\DynamicLayer.cpp:491 [backend fallback]\nFunctionalize: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\FunctionalizeFallbackKernel.cpp:280 [backend fallback]\nNamed: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\NamedRegistrations.cpp:7 [backend fallback]\nConjugate: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\ConjugateFallback.cpp:21 [kernel]\nNegative: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\native\\NegateFallback.cpp:23 [kernel]\nZeroTensor: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\ZeroTensorFallback.cpp:90 [kernel]\nADInplaceOrView: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\VariableFallbackKernel.cpp:63 
[backend fallback]\nAutogradOther: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradCPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradCUDA: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradHIP: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradXLA: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradMPS: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradIPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradXPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradHPU: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradVE: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradLazy: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradMeta: registered at 
C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradMTIA: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradPrivateUse1: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradPrivateUse2: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradPrivateUse3: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nAutogradNestedTensor: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\VariableType_2.cpp:17488 [autograd kernel]\nTracer: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\torch\\csrc\\autograd\\generated\\TraceType_2.cpp:16726 [kernel]\nAutocastCPU: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\autocast_mode.cpp:487 [backend fallback]\nAutocastCUDA: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\autocast_mode.cpp:354 [backend fallback]\nFuncTorchBatched: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\LegacyBatchingRegistrations.cpp:815 [backend fallback]\nFuncTorchVmapMode: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\VmapModeRegistrations.cpp:28 [backend fallback]\nBatched: registered at 
C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\LegacyBatchingRegistrations.cpp:1073 [backend fallback]\nVmapMode: fallthrough registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\VmapModeRegistrations.cpp:33 [backend fallback]\nFuncTorchGradWrapper: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\TensorWrapper.cpp:210 [backend fallback]\nPythonTLSSnapshot: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:152 [backend fallback]\nFuncTorchDynamicLayerFrontMode: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\functorch\\DynamicLayer.cpp:487 [backend fallback]\nPythonDispatcher: registered at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\PythonFallbackKernel.cpp:148 [backend fallback]\n\nException raised from reportError at C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\aten\\src\\ATen\\core\\dispatch\\OperatorEntry.cpp:548 (most recent call first):\n00007FFD716ED24200007FFD716ED1E0 c10.dll!c10::Error::Error [<unknown file> @ <unknown line number>]\n00007FFD716B481500007FFD716B47A0 c10.dll!c10::NotImplementedError::NotImplementedError [<unknown file> @ <unknown line number>]\n00007FFD1585640800007FFD15856220 torch_cpu.dll!c10::impl::OperatorEntry::reportError [<unknown file> @ <unknown line number>]\n00007FFD1569607400007FFD15696020 torch_cpu.dll!c10::impl::OperatorEntry::lookup [<unknown file> @ <unknown line number>]\n00007FFD161AD9EA00007FFD1613C8B0 torch_cpu.dll!at::_ops::xlogy__Tensor::redispatch [<unknown file> @ <unknown line number>]\n00007FFD1626F90E00007FFD1626F820 torch_cpu.dll!at::_ops::empty_strided::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164B61FF00007FFD1649ABA0 
torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164B368800007FFD1649ABA0 torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD15E7644300007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD161EA0EE00007FFD161E9EC0 torch_cpu.dll!at::_ops::empty_strided::call [<unknown file> @ <unknown line number>]\n00007FFD1570BC7600007FFD1570B790 torch_cpu.dll!at::TensorIteratorConfig::declare_static_shape [<unknown file> @ <unknown line number>]\n00007FFD15B7B10E00007FFD15B7AC70 torch_cpu.dll!at::native::_to_copy [<unknown file> @ <unknown line number>]\n00007FFD1670F38C00007FFD1670E0B0 torch_cpu.dll!at::compositeexplicitautograd::view_copy_symint_outf [<unknown file> @ <unknown line number>]\n00007FFD166EBA8200007FFD166A8730 torch_cpu.dll!at::compositeexplicitautograd::bucketize_outf [<unknown file> @ <unknown line number>]\n00007FFD15E7607800007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15EF598200007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15FB454C00007FFD15FB4470 torch_cpu.dll!at::_ops::_to_copy::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164AAA5200007FFD1649ABA0 torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD164B301200007FFD1649ABA0 torch_cpu.dll!at::_ops::view_as_real::redispatch [<unknown file> @ <unknown line number>]\n00007FFD15E7607800007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15EF598200007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15FB454C00007FFD15FB4470 torch_cpu.dll!at::_ops::_to_copy::redispatch [<unknown file> @ <unknown line number>]\n00007FFD174D514100007FFD174AC610 
torch_cpu.dll!torch::autograd::NotImplemented::~NotImplemented [<unknown file> @ <unknown line number>]\n00007FFD174FB66100007FFD174DE8E0 torch_cpu.dll!torch::autograd::GraphRoot::apply [<unknown file> @ <unknown line number>]\n00007FFD15E7607800007FFD15E56E40 torch_cpu.dll!at::TensorMaker::make_tensor [<unknown file> @ <unknown line number>]\n00007FFD15F28B6D00007FFD15F28920 torch_cpu.dll!at::_ops::_to_copy::call [<unknown file> @ <unknown line number>]\n00007FFD15B8160000007FFD15B810D0 torch_cpu.dll!at::native::to_dense_backward [<unknown file> @ <unknown line number>]\n00007FFD15B80ED900007FFD15B80DB0 torch_cpu.dll!at::native::to [<unknown file> @ <unknown line number>]\n00007FFD168C0D1800007FFD168BA880 torch_cpu.dll!at::compositeimplicitautograd::where [<unknown file> @ <unknown line number>]\n00007FFD168A9F1E00007FFD16860BE0 torch_cpu.dll!at::compositeimplicitautograd::broadcast_to_symint [<unknown file> @ <unknown line number>]\n00007FFD15FF9E7700007FFD15FE7940 torch_cpu.dll!at::_ops::zeros_out::redispatch [<unknown file> @ <unknown line number>]\n00007FFD160DA74600007FFD160DA4D0 torch_cpu.dll!at::_ops::to_dtype_layout::call [<unknown file> @ <unknown line number>]\n00007FFD1566335F00007FFD156631B0 torch_cpu.dll!at::Tensor::to [<unknown file> @ <unknown line number>]\n00007FF7E778D95000007FF7E778D8C0 sample.exe!atg_to [<unknown file> @ <unknown line number>]\n00007FF7E75FAE5C00007FF7E75FAE00 sample.exe!ZN3tch8wrappers16tensor_generated47_$LT$impl$u20$tch..wrappers..tensor..Tensor$GT$2to17h95f342b9efe0686bE [<unknown file> @ <unknown line number>]\n00007FF7E758375B00007FF7E7583720 sample.exe!ZN8burn_tch3ops10int_tensor165_$LT$impl$u20$burn_tensor..tensor..ops..int_tensor..IntTensorOps$LT$burn_tch..backend..TchBackend$LT$E$GT$$GT$$u20$for$u20$burn_tch..backend..TchBackend$LT$E$GT$$GT$13int_to_device17he788951f2ea20c64E [<unknown file> @ <unknown line number>]\n00007FF7E756C1D300007FF7E756C160 
sample.exe!ZN126_$LT$stablediffusion..model..stablediffusion..StableDiffusion$LT$B$GT$$u20$as$u20$burn_core..module..base..Module$LT$B$GT$$GT$3map17h756fb4e94c93959aE [<unknown file> @ <unknown line number>]\n00007FF7E75CCA0C00007FF7E75CC5F0 sample.exe!ZN8burn_tch3ops4base15TchOps$LT$E$GT$8mean_dim17hf16dedd15676e654E [<unknown file> @ <unknown line number>]\n00007FF7E758FF8600007FF7E758FF80 sample.exe!ZN3std10sys_common9backtrace28__rust_begin_short_backtrace17h09841e2f04a0c80cE [<unknown file> @ <unknown line number>]\n00007FF7E75CE5AC00007FF7E75CE5A0 sample.exe!ZN3std2rt10lang_start28_$u7b$$u7b$closure$u7d$$u7d$17hdc39dbb179ca69ffE.llvm.15969601819475355906 [<unknown file> @ <unknown line number>]\n00007FF7E775AAA800007FF7E775A9F0 sample.exe!std::rt::lang_start_internal [/rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library\\std\\src\\rt.rs @ 148]\n00007FF7E75CD7AC00007FF7E75CD780 sample.exe!main [<unknown file> @ <unknown line number>]\n00007FF7E77D801000007FF7E77D7F04 sample.exe!__scrt_common_main_seh [D:\\a\\_work\\1\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 288]\n00007FFDB39226AD00007FFDB3922690 KERNEL32.DLL!BaseThreadInitThunk [<unknown file> @ <unknown line number>]\n00007FFDB4A2AA6800007FFDB4A2AA40 ntdll.dll!RtlUserThreadStart [<unknown file> @ <unknown line number>]\n")', 
    D:\rust\.cargo\registry\src\index.crates.io-6f17d22bba15001f\tch-0.13.0\src\wrappers\tensor_generated.rs:17243:27
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    error: process didn't exit successfully: `target\release\sample.exe burn SDv1-4 7.5 20 "An ancient mossy stone." img` (exit code: 101)
    

    Following the instructions in README.md

    1. Downloaded the .bin file
    2. Set the environment variable:
    $env:TORCH_CUDA_VERSION='cu113'
    
    3. Ran the sample command:
    cargo run --release --bin sample burn SDv1-4 7.5 20 "An ancient mossy stone." img
    
    4. Encountered the error message above. I might be missing some dependency, but I would need guidance to understand what it is.

    OS: Windows 11 CPU: AMD Ryzen 9 7950X GPU: Nvidia GeForce RTX 4090 RAM: 64GB

    opened by Cody-Duncan 9
  • Cannot (easily) install python dependencies for "Optional: Extract and Convert a Fine-Tuned Model"

    1. There's no setup.py to handle installing dependencies.

    2. dump.py requires tinygrad dependency. Result:

    Traceback (most recent call last):
      File "D:\repo\stable_diffusion\stable-diffusion-burn\python\dump.py", line 15, in <module>
        from tinygrad.tensor import Tensor
    ModuleNotFoundError: No module named 'tinygrad'
    

    Fix: install tinygrad dependency.

    3. Installing tinygrad via package manager pulls the wrong version (it pulls 0.6.0, but a required symbol isn't introduced until after that version). Result:
    Traceback (most recent call last):
      File "<REDACTED>...\stable-diffusion-burn\python\dump.py", line 17, in <module>
        from tinygrad.nn import Conv2d, Linear, GroupNorm, LayerNorm, Embedding
    ImportError: cannot import name 'Embedding' from 'tinygrad.nn' (D:\python\python311\Lib\site-packages\tinygrad\nn\__init__.py)
    

    Fix: install the absolute latest tinygrad from source

    cd stable-diffusion-burn\python
    git clone https://github.com/tinygrad/tinygrad.git
    cd tinygrad
    python3 -m pip install -e .
    
    4. dump.py uses tinygrad's 'extra' directory, which AFAICT isn't an installable package. Result:
    Traceback (most recent call last):
      File "<REDACTED>...\stable-diffusion-burn\python\dump.py", line 18, in <module>
        from extra.utils import download_file
    ModuleNotFoundError: No module named 'extra'
    

    Fix: copy 'extra' directory into stable-diffusion-burn\python

    #cd stable-diffusion-burn\python
    cp tinygrad/extra/* extra/
    
    5. It seems to work now, but not for a .safetensors file. Result:
    > python dump.py  model.safetensors
    Traceback (most recent call last):
      File "<REDACTED>...\stable-diffusion-burn\python\dump.py", line 646, in <module>
        load_state_dict(model, torch_load(FILENAME)['state_dict'], strict=False)
                               ^^^^^^^^^^^^^^^^^^^^
      File "<REDACTED>...\tinygrad\tinygrad\state.py", line 115, in torch_load
        _, _, _, rwd, _, ids, base_offset = pkl.load(), pkl.load(), pkl.load(), f.tell(), pkl.load(), pkl.load(), f.tell()
                                            ^^^^^^^^^^
    _pickle.UnpicklingError: invalid load key, '`'.
    
    opened by Cody-Duncan 0