Open Machine Intelligence Framework for Hackers. (GPU/CPU)

Overview

Leaf • Join the chat at https://gitter.im/autumnai/leaf

Introduction

Leaf is an open Machine Learning Framework for hackers to build classical, deep, or hybrid machine learning applications. It was inspired by the brilliant people behind TensorFlow, Torch, Caffe, Rust, and numerous research papers, and it brings modularity, performance, and portability to deep learning.

Leaf has one of the simplest APIs, is lean and tries to introduce minimal technical debt to your stack.

See the [Leaf - Machine Learning for Hackers][leaf-book] book for more.
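
As a quick taste of Leaf's API, here is a minimal sketch that wires a linear layer and a sigmoid activation into a sequential network configuration. It is illustrative only: the module paths, `SequentialConfig`, and the `add_input` shape argument are assumptions based on the Leaf 0.2.x examples (compare the in-place activation snippet further down this page), not a verified API reference.

extern crate leaf;

use leaf::layer::{LayerConfig, LayerType};
use leaf::layers::{LinearConfig, SequentialConfig};

fn main() {
    // Describe the network as a sequence of layers connected by named blobs.
    let mut net_cfg = SequentialConfig::default();
    net_cfg.add_input("data", &[1, 784]); // batch of 1, 784 features (illustrative shape)

    // Fully connected layer: 784 inputs -> 10 outputs.
    let linear_cfg = LinearConfig { output_size: 10 };
    let mut lnr_cfg = LayerConfig::new("linear1", LayerType::Linear(linear_cfg));
    lnr_cfg.add_input("data");
    lnr_cfg.add_output("linear1_out");
    net_cfg.add_layer(lnr_cfg);

    // Sigmoid activation on top of the linear layer's output.
    let mut sigmoid_cfg = LayerConfig::new("sigmoid", LayerType::Sigmoid);
    sigmoid_cfg.add_input("linear1_out");
    sigmoid_cfg.add_output("prediction");
    net_cfg.add_layer(sigmoid_cfg);
}

Turning such a configuration into a runnable network additionally requires a Collenchyma backend (and a Solver for training); the [Leaf - Machine Learning for Hackers][leaf-book] book and the leaf-examples repository cover both.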

Leaf is a few months old, but thanks to its architecture and Rust, it is already one of the fastest Machine Intelligence Frameworks available.

See more Deep Neural Network benchmarks on [Deep Learning Benchmarks][deep-learning-benchmarks-website].

Leaf is portable. Run it on CPUs, GPUs, and FPGAs, on machines with an OS, or on machines without one. Run it with OpenCL or CUDA. Credit goes to Collenchyma and Rust.

Leaf is part of the [Autumn][autumn] Machine Intelligence Platform, which is working on making AI algorithms 100x more computationally efficient.

We see Leaf as the core for constructing high-performance machine intelligence applications. Leaf's design makes it easy to publish independent modules that make, e.g., deep reinforcement learning, visualization and monitoring, network distribution, automated preprocessing, or scalable production deployment accessible to everyone.

Disclaimer: Leaf is currently in an early stage of development. If you are experiencing any bugs with features that have been implemented, feel free to create an issue.

Getting Started

Documentation

To learn how to build classical, deep or hybrid machine learning applications with Leaf, check out the [Leaf - Machine Learning for Hackers][leaf-book] book.

For additional information see the Rust API Documentation or the [Autumn Website][autumn].

Or start by running the Leaf examples.

We provide a Leaf examples repository, where we and others publish executable machine learning models built with Leaf. It features a CLI for easy usage and has a detailed guide in the project README.md.

Leaf comes with an examples directory as well, which features popular neural networks (e.g. Alexnet, Overfeat, VGG). To run them on your machine, just follow the install guide, clone this repository, and then run

# The examples currently require CUDA support.
cargo run --release --no-default-features --features cuda --example benchmarks alexnet

Installation

Leaf is built in Rust. If you are new to Rust, you can install Rust as detailed here. We also recommend taking a look at the official Rust - Getting Started Guide.

To start building a machine learning application (Rust only for now; wrappers are welcome), just add Leaf to your Cargo.toml if you are using Cargo:

[dependencies]
leaf = "0.2.1"

If you are on a machine that doesn't support CUDA or OpenCL, you can selectively enable the backends you need like this in your Cargo.toml:

[dependencies]
leaf = { version = "0.2.1", default-features = false }

[features]
default = ["native"] # include only the ones you want to use, in this case "native"
native  = ["leaf/native"]
cuda    = ["leaf/cuda"]
opencl  = ["leaf/opencl"]

More information on the use of feature flags in Leaf can be found in FEATURE-FLAGS.md
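
To sketch how these feature flags typically surface in application code, the snippet below gates backend construction on whichever feature was enabled. The `Backend`, `Native`, and `Cuda` types come from Collenchyma; the exact paths and the `default()` constructor are assumptions based on the Collenchyma releases that Leaf 0.2.1 depends on, so treat this as a sketch rather than a verified reference.

extern crate collenchyma as co;

use co::backend::Backend;
use co::frameworks::{Cuda, Native};

// Construct whichever backend matches the feature enabled in Cargo.toml (assumed API).
#[cfg(feature = "cuda")]
fn backend() -> Backend<Cuda> {
    Backend::<Cuda>::default().expect("could not create the CUDA backend")
}

#[cfg(all(feature = "native", not(feature = "cuda")))]
fn backend() -> Backend<Native> {
    Backend::<Native>::default().expect("could not create the native backend")
}

fn main() {
    // Picks the native or CUDA backend depending on the enabled feature.
    let _backend = backend();
}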

Contributing

If you want to start hacking on Leaf (e.g. adding a new Layer) you should start with forking and cloning the repository.

We have more instructions to help you get started in the CONTRIBUTING.md.

We also have a near real-time collaboration culture, which happens here on GitHub and on the Leaf Gitter channel.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as below, without any additional terms or conditions.

Ecosystem / Extensions

We designed Leaf and the other crates of the [Autumn Platform][autumn] to be as modular and extensible as possible. More helpful crates you can use with Leaf:

  • Cuticula: Preprocessing Framework for Machine Learning
  • Collenchyma: Portable, HPC-Framework on any hardware with CUDA, OpenCL, Rust

Support / Contact

  • With a bit of luck, you can find us online on the #rust-machine-learning IRC at irc.mozilla.org,
  • but we are always approachable on Gitter/Leaf
  • For bugs and feature requests, you can create a GitHub issue
  • For more private matters, send us an email straight to our inbox: [email protected]
  • Refer to [Autumn][autumn] for more information

Changelog

You can find the release history at the CHANGELOG.md. We are using Clog, the Rust tool for auto-generating CHANGELOG files.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Comments
  • Remove redundant .enumerate() calls and fix cargo bench compilation

    Remove redundant .enumerate() calls and fix cargo bench compilation

    I've refactored and fixed some trivial things while reading the code.

    cargo bench compiles but panics at runtime; I think it should be easy to fix, but it looks like benches are deprecated.

    opened by alexandermorozov 14
  • perf/solver: don't zero fill weight gradients

    perf/solver: don't zero fill weight gradients

    Please verify this PR! I'm not completely sure that it's correct.

    Weight gradients aren't used before they are overwritten at the backpropagation step, so zero-filling them is redundant.

    If #89 is applied, this patch improves performance by 30% on leaf-examples mnist.

    opened by alexandermorozov 11
  • [WIP] The 'Leaf: Machine Learning for hackers' book

    [WIP] The 'Leaf: Machine Learning for hackers' book

    This PR is a work in progress. So far I have added chapters 1 (Leaf) and 2 (Layers).

    Feedback on the overall structure of the book, as well as on the general ideas and style of the first two chapters, is highly welcome.

    There are probably a lot of typos and grammar mistakes in there. You don't have to bother pointing them out yet; the text might change quickly.

    REFERENCE: #91

    opened by MichaelHirn 10
  • Create Leaf Book / high-level documentation

    Create Leaf Book / high-level documentation

    I feel like the entry barrier for (Rust) developers to engage with Leaf and Machine Learning, for contribution and hacking-away purposes, is still far too high. This is partly because the concepts of Machine Learning (Deep Learning) are not yet widely known, and partly because not many are familiar with the general design of a Machine Learning framework, compared, e.g., to the general design of a web framework.

    The Leaf Book should provide a practical introduction to Deep Learning for developers: explain the easy Leaf API and provide examples for popular use cases like adding a new Layer, Machine Learning across multiple devices, and so on. After reading it, a developer should feel comfortable hacking on Leaf, even if she has no prior knowledge of Deep Learning (Deep Learning is really easy).

    @hobofan pointed out in #45 the idea of interactive documentation for Layers. I am not sure to what extent it can be provided with the Leaf Book. For the interactive layer documentation, I have something with Jupyter in mind, which would require a Rust kernel first, though. Other options for interactive layer documentation are welcome.

    For the book I am trying mdBook, as it gives a nice layout and allows placing the book inside the leaf project. Feedback on the choice is welcome.

    docs contributing 
    opened by MichaelHirn 8
  • The `phloem` crate (currently used by `leaf`) seems to be deprecated in favour of collenchyma's `SharedTensor`

    The `phloem` crate (currently used by `leaf`) seems to be deprecated in favour of collenchyma's `SharedTensor`

    I also noticed that the collenchyma crate version seemed to be a little behind - I figure you must be busy with other work at the moment, but I just thought I'd ask what your plans are for updating. Are you currently waiting on developments upstream? Or has this simply not been updated yet due to lack of time?

    question 
    opened by mitchmindtree 7
  • fix/sgd: initialize weight gradient history with zeroes

    fix/sgd: initialize weight gradient history with zeroes

    The SGD solver used uninitialized history tensors. If there were some NaNs, then the whole network got poisoned after the first generation, even if momentum was set to zero. This patch prefills the gradient history with zeros.

    FIX: autumnai/leaf-examples#13

    opened by alexandermorozov 6
  • feat/activations: add in-place activations

    feat/activations: add in-place activations

    Activations can now be calculated in-place, requiring less memory. To use it, the same blob name should be supplied as input and output to an activation layer.

    Example:

    // set up linear1 layer
    let linear1_cfg = LinearConfig { output_size: 1568 };
    let mut lnr1_cfg = LayerConfig::new("linear1", LayerType::Linear(linear1_cfg));
    lnr1_cfg.add_input("data");
    lnr1_cfg.add_output("linear1_out");
    net_cfg.add_layer(lnr1_cfg);
    // set up sigmoid layer
    let mut sigmoid_cfg = LayerConfig::new("sigmoid", LayerType::Sigmoid);
    sigmoid_cfg.add_input("linear1_out"); // same input and output
    sigmoid_cfg.add_output("linear1_out"); // same input and output
    net_cfg.add_layer(sigmoid_cfg);
    
    opened by hobofan 6
  • style/travis: Allow travis to build on container based system

    style/travis: Allow travis to build on container based system

    Added the required apt packages. Added the ability to run clippy as a feature on nightly. Set sudo to false in Travis.

    Several compile issues remain, but they will be resolved with the upcoming 0.2 code fix.

    opened by dirvine 6
  • rust does not compile rblas, which is a dependency of leaf

    rust does not compile rblas, which is a dependency of leaf

    I am all new to rust so I apologize if this is useless:

    # rustc -V
    rustc 1.7.0-beta.3 (36237fc61 2016-02-11)
    

    I tried this with beta 1.7 and stable 1.6, both yield the very same result

    # cargo build
       Compiling byteorder v0.4.2
       Compiling rustc-serialize v0.3.18
       Compiling libc v0.1.12
       Compiling bitflags v0.3.3
       Compiling libc v0.2.7
       Compiling rblas v0.0.10
    /home/bernhard/.cargo/registry/src/github.com-88ac128001ac3a9a/libc-0.1.12/rust/src/liblibc/lib.rs:81:21: 81:39 warning: lint raw_pointer_derive has been removed: using derive with raw pointers is ok
    /home/bernhard/.cargo/registry/src/github.com-88ac128001ac3a9a/libc-0.1.12/rust/src/liblibc/lib.rs:81 #![allow(bad_style, raw_pointer_derive)]
                                                                                                                              ^~~~~~~~~~~~~~~~~~
       Compiling rand v0.3.14
       Compiling log v0.3.5
       Compiling num v0.1.31
       Compiling enum_primitive v0.1.0
    /home/bernhard/.cargo/registry/src/github.com-88ac128001ac3a9a/rblas-0.0.10/src/lib.rs:4:1: 4:27 error: #[feature] may not be used on the beta release channel
    /home/bernhard/.cargo/registry/src/github.com-88ac128001ac3a9a/rblas-0.0.10/src/lib.rs:4 #![feature(concat_idents)]
                                                                                             ^~~~~~~~~~~~~~~~~~~~~~~~~~
    error: aborting due to previous error
    Could not compile `rblas`.
    
    To learn more, run the command again with --verbose.
    
    

    Is this a known issue?

    duplicate 
    opened by drahnr 5
  • some tidy up and compilation fixes.

    some tidy up and compilation fixes.

    There is some commented-out code here, which was already unreachable (it comes after an unimplemented block). There are also some slight changes to Travis to allow running on the container-based system, and clippy was added as a feature to allow running it from the command line, which is perhaps easier.

    opened by dirvine 5
  • Fix Grammar in Main Crate Documentation

    Fix Grammar in Main Crate Documentation

    After reading the documentation I noticed the sentence

    The operations can run on different Backends {CPU, GPU} and must not be defined at compile time, which allows for easy backend swapping.

    In English, this means that the user is not allowed to specify the backend at compile time. That sounds a bit weird, and reads to me as if you wanted the equivalent of the German "braucht nicht" (having since learnt you are a startup in Berlin), so I replaced that with "doesn't have to". (Also, my editor re-wrapped some lines.)

    Just close this PR if that assumption was wrong.

    opened by killercup 5
  • cannot run example files in leaf

    cannot run example files in leaf

    Hi all, I am new here. I have CUDA and cuDNN installed with the paths set up, but I cannot run the example files for some reason. Am I missing something? The system is Ubuntu 16.04.

    akhileshsk@akhileshsk-home:~/rust_leaf_tutorials/leaf$ cargo run --release --no-default-features --features cuda --example benchmarks alexnet
       Compiling linear-map v0.0.4
       [... many dependency crates compile successfully ...]
       Compiling collenchyma v0.0.8
       Compiling collenchyma-blas v0.2.0
       Compiling collenchyma-nn v0.3.4
       Compiling leaf v0.2.1 (file:///home/akhileshsk/rust_leaf_tutorials/leaf)
       Compiling leaf v0.2.1
    error[E0004]: non-exhaustive patterns: &mut Cuda(_) not covered
      --> /home/akhileshsk/.cargo/registry/src/github.com-1ecc6299db9ec823/leaf-0.2.1/src/util.rs:28:11
       |
    28 |     match mem {
       |           ^^^ pattern &mut Cuda(_) not covered

    error[E0004]: non-exhaustive patterns: &Cuda(_) not covered
      --> /home/akhileshsk/.cargo/registry/src/github.com-1ecc6299db9ec823/leaf-0.2.1/src/solvers/mod.rs:77:24
       |
    77 |     match result.get(native.device()).unwrap() {
       |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pattern &Cuda(_) not covered

    error: aborting due to 2 previous errors
    error: Could not compile leaf.
    Build failed, waiting for other jobs to finish...
    error: build failed

    opened by aspk 3
  • Provide docker image and Dockerfile

    Provide docker image and Dockerfile

    It would be great to provide a Docker image and Dockerfile for newbie developers.

    We can also use that container to run leaf in the cloud much more easily.

    opened by tobegit3hub 1
  • build error with rustc 1.12

    build error with rustc 1.12

    I got a build error with rustc 1.12 on Debian stretch.

    $ RUST_BACKTRACE=1 cargo build
       Compiling leaf v0.2.1 (file:///home/<user>/work/leaf)
       Compiling num-rational v0.1.35
    Build failed, waiting for other jobs to finish...
    error: failed to run custom build command for `leaf v0.2.1 (file:///home/<user>/work/leaf)`
    Process didn't exit successfully: `/home/<user>/work/leaf/target/debug/build/leaf-87f43ad88e5c8bb2/build-script-build` (exit code: 101)
    --- stderr
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { kind: Failed, description: "Error while trying to execute `capnp compile`: Failed: No such file or directory (os error 2).  Please verify that version 0.5.2 or higher of the capnp executable is installed on your system. See https://capnproto.org/install.html" }', src/libcore/result.rs:788
    stack backtrace:
       1:     0x55e338350a6b - std::sys::backtrace::tracing::imp::write::h00e948915d1e4c72
       2:     0x55e338355ebf - std::panicking::default_hook::_{{closure}}::h7b8a142818383fb8
       3:     0x55e33835511d - std::panicking::default_hook::h41cf296f654245d7
       4:     0x55e3383557d5 - std::panicking::rust_panic_with_hook::h4cbd7ca63ce1aee9
       5:     0x55e338355622 - std::panicking::begin_panic::h93672d0313d5e8e9
       6:     0x55e338355591 - std::panicking::begin_panic_fmt::hd0daa02942245d81
       7:     0x55e338355511 - rust_begin_unwind
       8:     0x55e33838c97f - core::panicking::panic_fmt::hbfc935564d134c1b
       9:     0x55e3382950c2 - core::result::unwrap_failed::h14ae321fa6665b8d
                            at /build/rustc-1.12.0+dfsg1/src/libcore/result.rs:29
      10:     0x55e338294251 - _<core..result..Result<T, E>>::unwrap::hbe110bf6bea72ca6
                            at /build/rustc-1.12.0+dfsg1/src/libcore/result.rs:726
      11:     0x55e338297306 - build_script_build::main::he5071e1b00fcfe0e
                            at /home/<user>/work/leaf/build.rs:4
      12:     0x55e33835d896 - __rust_maybe_catch_panic
      13:     0x55e33835480d - std::rt::lang_start::h53bf99b0829cc03c
      14:     0x55e3382974e3 - main
      15:     0x7fa0f6ae52b0 - __libc_start_main
      16:     0x55e3382935e9 - _start
      17:                0x0 - <unknown>
    

    What should I check to make this work?

    opened by jinserk 2
  • Current status of leaf?

    Current status of leaf?

    From this post:

    Which is why Max and I will suspend the development of Leaf and focus on new ventures.

    Is this accurate? If so, should the README mention so?

    Autumn is also still listed on http://rust-lang.org/friends.html; is Autumn still a thing? If so, will the company continue to use Rust?

    opened by anp 45
  • cannot compile the example

    cannot compile the example

    Hi all, I tried to compile and run the leaf example with the following command, cargo run --release --no-default-features --features cuda --example benchmarks alexnet, but got errors like this:

    In function `convolution_descriptor::ConvolutionDescriptor::new::hbe3406e228523108khb':
    cudnn.0.rs:(.text._ZN22convolution_descriptor21ConvolutionDescriptor3new20hbe3406e228523108khbE+0x29c): undefined reference to `cudnnSetConvolutionNdDescriptor_v3'
    cudnn.0.rs:(.text._ZN22convolution_descriptor21ConvolutionDescriptor3new20hbe3406e228523108khbE+0x346): undefined reference to `cudnnSetConvolutionNdDescriptor_v3'
    cudnn.0.rs:(.text._ZN22convolution_descriptor21ConvolutionDescriptor3new20hbe3406e228523108khbE+0x3a2): undefined reference to `cudnnSetConvolutionNdDescriptor_v3'
    

    I am using rustc 1.8.0 with cargo 0.10.0.

    please give me some hints.

    thanks

    opened by jianingy 5