Rust implementation of real-coded GA for solving optimization problems and training of neural networks

Overview

revonet

Rust implementation of a real-coded genetic algorithm for solving optimization problems and training neural networks. The latter is also known as neuroevolution.

Features:

  • real-coded evolutionary algorithm
  • neuroevolutionary tuning of the weights of a neural network with a fixed structure
  • support for several feed-forward architectures

https://github.com/yurytsoy/revonet/blob/master/imgs/nn_arch.png

  • automatically computes statistics for single and multiple runs for EA and NE
  • EA settings and results can be saved to JSON (see the sketch after this list)
  • allows defining user-specified objective functions for EA and NE (see examples below)
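
For example, saving EA settings to JSON can look roughly like the sketch below. This is only an illustration using serde_json and assumes that `EASettings` implements serde's `Serialize`; the crate's own persistence helpers may use a different mechanism.

// Hypothetical sketch: assumes `EASettings` implements serde's `Serialize`;
// the library's built-in JSON support may differ.
let settings = EASettings::new(20u32, 10u32, 10u32);
let json = serde_json::to_string_pretty(&settings).expect("Can not serialize settings");
std::fs::write("ea_settings.json", json).expect("Can not write settings file");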

Examples

Real-coded genetic algorithm

let pop_size = 20u32;       // population size.
let problem_dim = 10u32;    // number of optimization parameters.

let problem = RosenbrockProblem{};  // objective function.
let gen_count = 10u32;      // number of generations.
let settings = EASettings::new(pop_size, gen_count, problem_dim);
let mut ga: GA<RosenbrockProblem> = GA::new(settings, &problem);   // init GA.
let res = ga.run(settings).expect("Error during GA run");  // run and fetch the results.

// get and print results of the current run.
println!("\n\nGA results: {:?}", res);

// make multiple runs and get combined results.
let res = ga.run_multiple(settings, 10u32).expect("Error during multiple GA runs");
println!("\n\nResults of multiple GA runs: {:?}", res);

Run evolution of NN weights to solve a regression problem

let (pop_size, gen_count, param_count) = (20, 20, 100); // param_count does not matter here as the NN structure is defined by the problem.
let settings = EASettings::new(pop_size, gen_count, param_count);
let problem = SymbolicRegressionProblem::new_f();

let mut ne: NE<SymbolicRegressionProblem> = NE::new(&problem);
let res = ne.run(settings).expect("Error: NE result is empty");
println!("result: {:?}", res);
println!("\nbest individual: {:?}", res.best);

Creating a multilayered neural network with two hidden sigmoid layers and linear output nodes.

const INPUT_SIZE: usize = 20;
const OUTPUT_SIZE: usize = 2;

let mut rng = rand::thread_rng();   // needed for weights initialization when NN is built.
let mut net: MultilayeredNetwork = MultilayeredNetwork::new(INPUT_SIZE, OUTPUT_SIZE);
net.add_hidden_layer(30, ActivationFunctionType::Sigmoid)
     .add_hidden_layer(20, ActivationFunctionType::Sigmoid)
     .build(&mut rng, NeuralArchitecture::Multilayered);       // `build` finishes creation of the neural network.

let (ws, bs) = net.get_weights();   // `ws` and `bs` are `Vec`s containing weights and biases for each layer.
assert!(ws.len() == 3);     // one element per hidden layer plus one for the output layer.
assert!(bs.len() == 3);     // one element per hidden layer plus one for the output layer.
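
Once built, the network can be evaluated on an input vector. The short sketch below assumes that `MultilayeredNetwork` provides the `compute` method of the `NeuralNetwork` trait (the same method used by `compute_with_net` further below):

// Forward pass through the freshly built network (sketch).
// Assumes `net.compute` is available via the `NeuralNetwork` trait.
let input = (0..INPUT_SIZE)
                .map(|_| rng.gen::<f32>())
                .collect::<Vec<f32>>();
let output = net.compute(&input);
assert!(output.len() == OUTPUT_SIZE);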

Creating custom optimization problem for GA

// Dummy problem returning random fitness.
pub struct DummyProblem;
impl Problem for DummyProblem {
    // Function to evaluate a specific individual.
    fn compute<T: Individual>(&self, ind: &mut T) -> f32 {
        // use `to_vec` to get the real-coded representation of an individual
        // (not used further by this dummy problem, hence the underscore).
        let _v = ind.to_vec().unwrap();

        let mut rng: StdRng = StdRng::from_seed(&[0]);
        rng.gen::<f32>()
    }
}
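
A custom problem like this plugs into the GA exactly as `RosenbrockProblem` does above; a minimal sketch:

// Sketch: run the GA against the custom problem, mirroring the Rosenbrock example above.
let problem = DummyProblem;
let settings = EASettings::new(20u32, 10u32, 10u32);   // population size, generations, dimensionality.
let mut ga: GA<DummyProblem> = GA::new(settings, &problem);
let res = ga.run(settings).expect("Error during GA run");
println!("GA results: {:?}", res);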

Creating custom problem for NN evolution

// Dummy problem returning random fitness.
struct RandomNEProblem {}
impl RandomNEProblem {
    fn new() -> RandomNEProblem {
        RandomNEProblem{}
    }
}
impl NeuroProblem for RandomNEProblem {
    // return number of NN inputs.
    fn get_inputs_num(&self) -> usize {1}
    // return number of NN outputs.
    fn get_outputs_num(&self) -> usize {1}
    // return an NN with random weights and a fixed structure. For now the structure should stay
    // the same across calls so that crossover is possible; this is likely to change in the future.
    fn get_default_net(&self) -> MultilayeredNetwork {
        let mut rng = rand::thread_rng();
        let mut net: MultilayeredNetwork = MultilayeredNetwork::new(self.get_inputs_num(), self.get_outputs_num());
        net.add_hidden_layer(5, ActivationFunctionType::Sigmoid)
            .build(&mut rng, NeuralArchitecture::Multilayered);
        net
    }
    // Function to evaluate performance of a given NN.
    fn compute_with_net<T: NeuralNetwork>(&self, nn: &mut T) -> f32 {
        let mut rng: StdRng = StdRng::from_seed(&[0]);
        let input = (0..self.get_inputs_num())
                            .map(|_| rng.gen::<f32>())
                            .collect::<Vec<f32>>();
        // compute NN output using the random input.
        let output = nn.compute(&input);
        output[0]
    }
}
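
Such a problem can then be evolved in the same way as the symbolic regression example above; a brief sketch:

// Sketch: neuroevolution for the custom problem, mirroring the regression example above.
let (pop_size, gen_count, param_count) = (20, 20, 100); // param_count is ignored, the NN defines the structure.
let settings = EASettings::new(pop_size, gen_count, param_count);
let problem = RandomNEProblem::new();
let mut ne: NE<RandomNEProblem> = NE::new(&problem);
let res = ne.run(settings).expect("Error: NE result is empty");
println!("best individual: {:?}", res.best);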


Comments
  • Refactoring

    Currently there is something wrong with the way EA, GA and NE are organized. It is hard to say exactly what, but one of the problems is that some generics are defined locally (for functions in EA), while GA and NE define those generics per implementation. Overloading functions becomes a headache. A better way needs to be found.

    v0.2.0 
    opened by yurytsoy 2
  • Improvement for GA infrastructure

    • [x] Add GA context
    • [x] Make reproducible by enabling seeded rng and passing rng in the context
    • [x] Implement EA trait
    • [x] Add saving/loading settings/results to json
    • [x] Add support for more crossover and mutation operators
    • [ ] Add speciation
    v0.1.0 
    opened by yurytsoy 2
  • Add possibility to create ANNs with skip connections

    Implement this by adding the following function to the Layer trait:

    compute_with_bypass(inputs: &[f32], bypass: &[f32]) -> Vec<f32>
    

    The layer computes its output from `inputs` and returns that output vector concatenated with `bypass`; the concatenation is performed right before the return (a sketch follows below).

    Examples of possible usage:

    • use the ANN input signals as a bypass for every layer except the output one, with the output of layer k as the input for layer k+1, to implement skip connections between the ANN inputs and every layer.
    • use a layer's output as a bypass for the next layer to propagate the outputs of all layers forward through the network.
    • use the first layer_size elements of the layer output as a bypass for the next layer to make layer outputs "jump" over the next layer.
    enhancement 
    opened by yurytsoy 1
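
    A minimal sketch of how the proposed function could look; the `compute` method and the `&mut self` receiver are assumptions based on the description above, not an existing implementation:

    // Hypothetical sketch of the proposed `compute_with_bypass` for the `Layer` trait.
    // Assumes the layer already provides `compute(&mut self, inputs: &[f32]) -> Vec<f32>`.
    // The bypass slice is appended to the layer output right before returning.
    fn compute_with_bypass(&mut self, inputs: &[f32], bypass: &[f32]) -> Vec<f32> {
        let mut output = self.compute(inputs);
        output.extend_from_slice(bypass);
        output
    }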
Owner
Yury Tsoy