A simple neural net implementation.

Overview

PROPHET - Neural Network Library


A simple neural net implementation written in Rust with a focus on cache-efficiency and sequential performance.

Currently, prophet only supports supervised learning with fully connected layers.
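To show what such a network computes, here is a minimal sketch of a fully connected layer's forward pass built directly on ndarray (which prophet uses internally). The types and names are illustrative only, not prophet's actual API:

```rust
use ndarray::{Array1, Array2};

/// Illustrative dense layer: not prophet's API, just the math it implements.
struct DenseLayer {
    weights: Array2<f32>, // shape: (outputs, inputs)
    bias: Array1<f32>,    // shape: (outputs,)
}

impl DenseLayer {
    /// Forward pass: tanh(W · x + b).
    fn forward(&self, input: &Array1<f32>) -> Array1<f32> {
        (self.weights.dot(input) + &self.bias).mapv(f32::tanh)
    }
}

fn main() {
    let layer = DenseLayer {
        weights: Array2::from_shape_vec((2, 3), vec![0.1, -0.2, 0.3, 0.4, -0.5, 0.6]).unwrap(),
        bias: Array1::zeros(2),
    };
    let output = layer.forward(&Array1::from(vec![1.0, -1.0, 0.5]));
    println!("{:?}", output);
}
```

Training then consists of adjusting the weights and bias from labeled examples via backpropagation, which is the supervised setting prophet targets.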

How to use

The preferred way to obtain prophet is via cargo or GitHub.
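If you go the cargo route, add prophet to the [dependencies] section of your Cargo.toml; the "0.4" shown here matches the latest release listed in the release notes below:

prophet = "0.4"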

Compile prophet with

cargo build

Run the test suite with

cargo test --release

Note: It is recommended to use --release for testing since optimizations are insanely effective for prophet.

For additional output while running some of the long tests, use

cargo test --release --verbose -- --nocapture

Run the performance tests with

cargo bench --features benches

Planned Features

  • Convolutional Layers: Foundations have already been laid!
  • GPGPU Support by Vulkano
  • Even more flexible learning methods

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Release Notes (YYYY/MM/DD)

0.4.2 (2017/10/13)

  • Relicensed the library under the dual license model where the user can choose between the MIT license and the Apache License, Version 2.0.
  • Improved performance of learning algorithms by up to 27%*. (*Tested on my local machine.)
  • Updated ndarray from 0.10.10 to 0.10.11 and itertools from 0.6.5 to 0.7.0.
  • Relaxed dependency version constraints for rand, num, log and ndarray.
  • Usability: Added a HOW TO USE section to the README.
  • Dev
    • Added some unit tests for NeuralNet components for improved stability and maintainability.

0.4.1 (2017/08/27)

  • Fixed a long-standing nondeterministic bug.
  • Reverted the ChaChaRng usage in NeuralLayer::random back to weak_rng, which is much faster; ChaChaRng's cryptographic strength is not needed there.

0.4.0 (2017/08/09)

  • Updated ndarray dependency version from 0.9 to 0.10.
  • Updated serde dependency version from 0.9 to 1.0.
  • Enabled the serde feature by default.
  • NeuralLayer::random now uses ChaChaRng internally instead of weak_rng.
  • Devel:
    • travisCI now using new trusty environment
    • travisCI now uploads code coverage to coveralls and codecov.io
    • travisCI no longer requires sudo

Comments
  • Applied `cargo fmt` with the `.rustfmt.toml` from master branch.

    Here is the cargo fmt in isolation.

    I just noticed there was a .rustfmt.toml on master but, for some reason, not on next, which confused me initially. I moved it over, disabled write_mode (which is no longer recognized) and applied it.

    I don't have any strong feelings about how the content should be formatted; I just think that some formatting should be applied, as otherwise it's hard to match it manually.

    opened by ralfbiedert 9
  • Some updates.

    I was looking for a small, portable neural net library and found your project.

    Right now I'm only toying around and trying to understand how things work. In that process I upgraded the project to Rust 2018 and applied various clippy and warning fixes, as well as cargo fmt.

    I was also curious whether you accept PRs, and whether you would consider resuming development (incl. publishing new releases).

    opened by ralfbiedert 8
  • Refactor layer sizes to be of equal sizes

    The neural network's concrete layers were designed to be very memory efficient, which resulted in asymmetric layer sizes (because of the way bias neurons are represented). This has led to several issues and implementation difficulties that prevent the use of more efficiently implemented lower-level algorithms provided by ndarray and other libraries. Refactoring the layer sizes also helps with implementing the new layer and topology layer types, paving the way for Convolutional Layers in the end. A sketch of the two layouts follows below.
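    For context, here is a minimal illustration of the two weight layouts in question. The type names are made up for this sketch (built on ndarray), not taken from prophet's code:

```rust
use ndarray::{Array1, Array2};

// Asymmetric layout: the bias is folded in as an extra "+1" input
// column, so a layer with n inputs and m outputs stores an (m, n + 1)
// matrix and every input vector must carry a trailing constant 1.0.
struct FoldedBiasLayer {
    weights: Array2<f32>, // shape: (m, n + 1)
}

// Symmetric layout: weights and bias are stored separately, so layer
// sizes mirror the logical topology and ndarray's optimized
// matrix-vector kernels apply directly to the (m, n) matrix.
struct SplitBiasLayer {
    weights: Array2<f32>, // shape: (m, n)
    bias: Array1<f32>,    // shape: (m,)
}
```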

    enhancement 
    opened by Robbepop 1
  • Redesign topology builder

    The current topology builder is not flexible enough for building complex neural networks involving different layer types such as Convolutional Layers.

    The new topology build infrastructure should enforce a mirroring between abstract topology layers and concrete neural network layers.

    Some thoughts have already been put into this matter; the results so far are (a sketch of such a builder follows the list):

    • There could be a need for some kind of InputLayer in the topology structure to represent the initial layer.
    • Fully connected layers could be renamed to DenseLayers, which is shorter.
    • There is a need for a generic ActivationLayer that is only responsible for applying the activation function. For identity activation functions this has the effect of simply omitting the activation layer.
    • At first, DenseLayer and ActivationLayer are fit to replace the current system. This structure then makes it possible to later add layer types for convolutional computation, such as a 2-dimensional PoolingLayer and ConvolutionLayer.
    • For interoperability between convolutional layers (2D) and normal layers (1D) there needs to be some kind of conversion between the 2-dimensional and the 1-dimensional layers.
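    Below is a hypothetical sketch of such a mirrored builder. All names (Topology, LayerSpec, Activation, and the methods) are illustrative, not prophet's current API:

```rust
// Abstract topology layers that mirror concrete neural network layers.
enum Activation {
    Identity,
    Tanh,
}

enum LayerSpec {
    Input { size: usize },
    Dense { outputs: usize },
    Activation(Activation),
}

struct Topology {
    layers: Vec<LayerSpec>,
}

impl Topology {
    /// Every topology starts from an explicit input layer.
    fn input(size: usize) -> Self {
        Topology { layers: vec![LayerSpec::Input { size }] }
    }

    /// Appends a fully connected (dense) layer.
    fn dense(mut self, outputs: usize) -> Self {
        self.layers.push(LayerSpec::Dense { outputs });
        self
    }

    /// Appends an activation layer; identity is simply left away.
    fn activation(mut self, act: Activation) -> Self {
        if !matches!(act, Activation::Identity) {
            self.layers.push(LayerSpec::Activation(act));
        }
        self
    }
}

fn main() {
    // Mirrors: Input(2) -> Dense(3) -> Tanh -> Dense(1) -> Tanh
    let topology = Topology::input(2)
        .dense(3)
        .activation(Activation::Tanh)
        .dense(1)
        .activation(Activation::Tanh);
    println!("{} layer specs", topology.layers.len());
}
```

    Because each abstract layer maps one-to-one onto a concrete layer, an identity activation can be dropped at build time, exactly as the list above suggests.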
    enhancement 
    opened by Robbepop 1
  • Fix nondeterministic test failure on Windows (Appveyor)

    For many versions now, appveyor has randomly and nondeterministically failed to build and test Prophet. The integration tests especially are causing trouble. Things run fine on travisCI, though.

    For more information, visit: https://users.rust-lang.org/t/windows-undeterministic-test-timeout/12284

    bug 
    opened by Robbepop 1
  • Create utility abstractions to better distribute invariants

    Create utility abstractions that help provide guarantees for invariants throughout the program's execution. Testability should also improve significantly by partitioning the program's complexity into smaller units.

    This includes, but is not limited to, the following utility components (an illustrative sketch follows the lists):

    • [x] WeightsMatrix & DeltaWeightsMatrix: Abstractions built on top of Array2<f32> that provide a more semantic interface for the weight matrices used mainly by FullyConnectedLayer.
    • [x] SignalBuffer: Abstraction to model signals instead of using raw Array1<f32>.
    • [x] ErrorSignalBuffer: Same as with SignalBuffer but specialized for gradients in gradient descent computation.

    Certain layer kinds:

    • [x] FullyConnectedLayer: Abstracts mechanics of fully connected (dense) layers.
    • [x] ActivationLayer: A layer for activation application.
    • [x] ContainerLayer: A layer that contains other layers in a strictly sequential order.
    • [x] Layer, including variants for FullyConnectedLayer and ActivationLayer

    Core functionality & utilities:

    • [x] NeuralNet: Implements the concrete interface for neural networks and contains layers that describe its computation.

    Also model some traits that each stand for one important operation or behaviour shared by those abstractions:

    • [x] ProcessInputSignal: For internal (layer-based) feed forward operations.
    • [x] HasOutputSignal: For layers with output signals. In theory every layer requires this.
    • [x] CalculateOutputErrorSignal: For layers that work with gradient descent learning.
    • [x] HasErrorSignal: See above ...
    • [x] PropagateErrorSignal: See above ...
    • [x] ApplyErrorSignalCorrection: See above ...
    • [x] SizedLayer: Interface for working with input and output size of layers.
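    As an illustration of the intent behind these wrappers, here is a minimal newtype in the spirit of SignalBuffer. It is not the actual prophet implementation, just a sketch of how the length invariant moves into one place:

```rust
use ndarray::Array1;

/// Illustrative wrapper over the raw ndarray buffer so that the size
/// invariant lives in one place instead of at every call site.
struct SignalBuffer(Array1<f32>);

impl SignalBuffer {
    /// Creates a zeroed signal buffer of the given length.
    fn zeros(len: usize) -> Self {
        SignalBuffer(Array1::zeros(len))
    }

    /// Copies `values` into the buffer, enforcing the length invariant once here.
    fn assign(&mut self, values: &[f32]) -> Result<(), String> {
        if values.len() != self.0.len() {
            return Err(format!("expected {} values, got {}", self.0.len(), values.len()));
        }
        self.0.assign(&Array1::from(values.to_vec()));
        Ok(())
    }
}

fn main() {
    let mut signal = SignalBuffer::zeros(3);
    signal.assign(&[0.5, -1.0, 0.25]).unwrap();
    assert!(signal.assign(&[1.0, 2.0]).is_err()); // wrong length is rejected
}
```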
    enhancement 
    opened by Robbepop 0
  • Make 0.5 master, and 0.4 a branch for better visibility.

    Right now the project looks dead on GitHub, since there hasn't been any activity on master for ~1 year, and for most code parts even ~2 years.

    That makes it hard to attract more developers (or even to explain to your colleagues that the project is not totally abandoned ...)

    I think it would make sense to make next the new master for better visibility.

    opened by ralfbiedert 2
  • Write unittests for ...

    Write the remaining unit tests for the new (and old) components of this library; a sketch of the intended test style follows the checklist.

    Missing unittests are:

    • utils.rs:
      • [x] LearnRate
      • [x] LearnMomentum
    • nn.rs:
      • [ ] NeuralNet
    • topology_v4.rs:
      • [x] FullyConnectedLayer
      • [x] ActivationLayer
      • [x] AnyLayer
      • [x] LayerSize
      • [x] Topology
      • [x] InitializingTopology
    • layer:
      • [ ] ActivationLayer
      • [ ] FullyConnectedLayer
      • [ ] ContainerLayer
      • [ ] AnyLayer
      • utils module:
        • [x] buffer_base module
        • [x] matrix_base module
    • trainer:
      • utils module:
        • [x] MeanSquaredError
      • mentor module:
        • [x] Context
        • [ ] Mentor
        • [ ] InitializingMentor
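    For illustration, here is the style of unit test the checklist calls for, using a stand-in LearnRate newtype; both the type and its (0, 1] range invariant are assumptions for this sketch, not prophet's actual definitions:

```rust
/// Stand-in newtype for illustration only; not prophet's actual LearnRate.
#[derive(Debug, PartialEq)]
pub struct LearnRate(f32);

impl LearnRate {
    /// Assumed invariant for this sketch: a learn rate must lie in (0, 1].
    pub fn new(rate: f32) -> Result<Self, String> {
        if rate > 0.0 && rate <= 1.0 {
            Ok(LearnRate(rate))
        } else {
            Err(format!("learn rate out of range: {}", rate))
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn learn_rate_validates_its_range() {
        assert!(LearnRate::new(0.3).is_ok());
        assert!(LearnRate::new(0.0).is_err());
        assert!(LearnRate::new(1.5).is_err());
    }
}
```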
    opened by Robbepop 0
  • Less panics, more Results

    Currently Prophet handles too many (potential) errors with panics instead of proper Rust-style error handling via a Result return type and proper error kinds. This leads to less testable code, prevents better error-reporting strategies, and should be addressed.

    The next branch already implements (decent) error kinds and a Rust-style Error struct with the typical interface impls. The next step is to migrate the rest of the code base away from asserts to this format; a minimal sketch of the pattern follows.
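    A minimal sketch of the pattern, with an illustrative ErrorKind (prophet's actual error kinds on next may differ):

```rust
use std::fmt;

/// Illustrative error kind; the real set of kinds lives on the next branch.
#[derive(Debug)]
enum ErrorKind {
    InvalidLayerSize { expected: usize, actual: usize },
}

#[derive(Debug)]
struct Error {
    kind: ErrorKind,
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self.kind {
            ErrorKind::InvalidLayerSize { expected, actual } => {
                write!(f, "invalid layer size: expected {}, got {}", expected, actual)
            }
        }
    }
}

impl std::error::Error for Error {}

/// Instead of `assert_eq!(actual, expected)`, return a Result.
fn check_input_len(actual: usize, expected: usize) -> Result<(), Error> {
    if actual == expected {
        Ok(())
    } else {
        Err(Error { kind: ErrorKind::InvalidLayerSize { expected, actual } })
    }
}

fn main() {
    match check_input_len(3, 4) {
        Ok(()) => println!("sizes match"),
        Err(err) => println!("error: {}", err),
    }
}
```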

    enhancement 
    opened by Robbepop 0