revonet

Overview

Rust implementation of a real-coded genetic algorithm for solving optimization problems and training neural networks. The latter is also known as neuroevolution.

Features:

  • real-coded evolutionary algorithm
  • neuroevolutionary tuning of the weights of a neural network with a fixed structure
  • supports several feed-forward architectures (see the diagram below)

https://github.com/yurytsoy/revonet/blob/master/imgs/nn_arch.png

  • automatically computes statistics for single and multiple runs, for both EA and NE
  • EA settings and results can be saved to JSON
  • user-defined objective functions for EA and NE (see the examples below)

Examples

Real-coded genetic algorithm

let pop_size = 20u32;       // population size.
let problem_dim = 10u32;    // number of optimization parameters.

let problem = RosenbrockProblem{};  // objective function.
let gen_count = 10u32;      // number of generations.
let settings = GASettings::new(pop_size, gen_count, problem_dim);
let mut ga: GA<RosenbrockProblem> = GA::new(settings, &problem);   // init GA.
let res = ga.run(settings).expect("Error during GA run");  // run and fetch the results.

// get and print results of the current run.
println!("\n\nGA results: {:?}", res);

// make multiple runs and get combined results.
let res = ga.run_multiple(settings, 10u32).expect("Error during multiple GA runs");
println!("\n\nResults of multiple GA runs: {:?}", res);

Evolving NN weights to solve a regression problem

let (pop_size, gen_count, param_count) = (20, 20, 100); // param_count does not matter here, as the NN structure is defined by the problem.
let settings = EASettings::new(pop_size, gen_count, param_count);
let problem = SymbolicRegressionProblem::new_f();

let mut ne: NE<SymbolicRegressionProblem> = NE::new(&problem);
let res = ne.run(settings).expect("Error: NE result is empty");
println!("result: {:?}", res);
println!("\nbest individual: {:?}", res.best);

Creating a multilayered neural network with two sigmoid hidden layers and linear output nodes

const INPUT_SIZE: usize = 20;
const OUTPUT_SIZE: usize = 2;

let mut rng = rand::thread_rng();   // needed for weights initialization when NN is built.
let mut net: MultilayeredNetwork = MultilayeredNetwork::new(INPUT_SIZE, OUTPUT_SIZE);
net.add_hidden_layer(30, ActivationFunctionType::Sigmoid)
     .add_hidden_layer(20, ActivationFunctionType::Sigmoid)
     .build(&mut rng, NeuralArchitecture::Multilayered);       // `build` finishes creation of the neural network.

let (ws, bs) = net.get_weights();   // `ws` and `bs` are `Vec`s containing the weights and biases for each layer.
assert!(ws.len() == 3);     // one element per hidden layer + 1 for the output layer.
assert!(bs.len() == 3);     // same for the biases.
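
Once built, the network can be evaluated on an input vector. A minimal sketch, assuming `MultilayeredNetwork` exposes the same `compute` method as the `NeuralNetwork` trait used in `compute_with_net` further down:

let input = (0..INPUT_SIZE)
                .map(|_| rng.gen::<f32>())
                .collect::<Vec<f32>>();     // random input vector.
let output = net.compute(&input);           // forward pass through all layers.
assert!(output.len() == OUTPUT_SIZE);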

Creating a custom optimization problem for GA

use rand::{Rng, SeedableRng, StdRng};   // rand 0.3/0.4-era API is assumed here.

// Dummy problem returning random fitness.
pub struct DummyProblem;
impl Problem for DummyProblem {
    // Function to evaluate a specific individual.
    fn compute<T: Individual>(&self, ind: &mut T) -> f32 {
        // `to_vec` yields the real-coded representation of an individual
        // (unused by this dummy fitness).
        let _v = ind.to_vec().unwrap();

        let mut rng: StdRng = StdRng::from_seed(&[0]);
        rng.gen::<f32>()
    }
}
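
The custom problem then plugs into the GA exactly like `RosenbrockProblem` in the first example; a minimal sketch reusing the same calls:

let settings = GASettings::new(20u32, 10u32, 10u32);    // population size, generations, dimensions.
let problem = DummyProblem{};
let mut ga: GA<DummyProblem> = GA::new(settings, &problem);
let res = ga.run(settings).expect("Error during GA run");
println!("GA results for DummyProblem: {:?}", res);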

Creating a custom problem for NN evolution

// Dummy problem returning random fitness.
struct RandomNEProblem {}
impl RandomNEProblem {
    fn new() -> RandomNEProblem {
        RandomNEProblem{}
    }
}
impl NeuroProblem for RandomNEProblem {
    // return number of NN inputs.
    fn get_inputs_num(&self) -> usize {1}
    // return number of NN outputs.
    fn get_outputs_num(&self) -> usize {1}
    // Return an NN with random weights and a fixed structure. For now the structure
    // must stay the same across calls so that crossover remains possible; this is
    // likely to change in the future.
    fn get_default_net(&self) -> MultilayeredNetwork {
        let mut rng = rand::thread_rng();
        let mut net: MultilayeredNetwork = MultilayeredNetwork::new(self.get_inputs_num(), self.get_outputs_num());
        net.add_hidden_layer(5, ActivationFunctionType::Sigmoid)
            .build(&mut rng, NeuralArchitecture::Multilayered);
        net
    }
    // Function to evaluate performance of a given NN.
    fn compute_with_net<T: NeuralNetwork>(&self, nn: &mut T) -> f32 {
        let mut rng: StdRng = StdRng::from_seed(&[0]);
        let input = (0..self.get_inputs_num())
                            .map(|_| rng.gen::<f32>())
                            .collect::<Vec<f32>>();
        // compute the NN output for the random input.
        let output = nn.compute(&input);
        output[0]
    }
}
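
A custom NE problem is run the same way as in the regression example above; a minimal sketch:

let settings = EASettings::new(20, 20, 100);    // param_count is ignored: the structure comes from `get_default_net`.
let problem = RandomNEProblem::new();
let mut ne: NE<RandomNEProblem> = NE::new(&problem);
let res = ne.run(settings).expect("Error: NE result is empty");
println!("best individual: {:?}", res.best);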


Comments
  • Refactoring

    Currently there is something wrong with the way EA, GA, and NE are organized. One of the problems is that some generics are defined locally (for functions in EA), while GA and NE define those generics per implementation, so overloading functions becomes a headache. A better way needs to be found.

    v0.2.0 
    opened by yurytsoy 2
  • Improvement for GA infrastructure

    • [x] Add GA context
    • [x] Make reproducible by enabling seeded rng and passing rng in the context
    • [x] Implement EA trait
    • [x] Add saving/loading settings/results to json
    • [x] Add support for more crossover and mutation operators
    • [ ] Add speciation
    v0.1.0 
    opened by yurytsoy 2
  • Add possibility to create ANNs with skip connections

    Implement by adding a function to the trait Layer:

    fn compute_with_bypass(&self, inputs: &[f32], bypass: &[f32]) -> Vec<f32>

    The layer computes its output from `inputs` and returns the output vector concatenated with `bypass`; the concatenation is performed right before the return.

    Examples of possible usage (a sketch follows below):

    • use the ANN input signals as a bypass for every layer except the output, with the output of layer k as the input of layer k+1, to implement skip connections between the ANN inputs and every layer.
    • use a layer's output as a bypass for the next layer to propagate the outputs of all layers forward through the network.
    • use the first layer_size elements of a layer's output as a bypass for the next layer to make layer outputs "jump" over the next layer.
    enhancement 
    opened by yurytsoy 1
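
    A minimal self-contained sketch of the proposed method (the `DenseLayer` struct, its fields, and the sigmoid activation are illustrative assumptions; only the `compute_with_bypass` signature comes from the issue):

    struct DenseLayer {
        weights: Vec<Vec<f32>>,   // weights[j] holds the input weights of output node j.
        biases: Vec<f32>,         // one bias per output node.
    }

    impl DenseLayer {
        fn compute_with_bypass(&self, inputs: &[f32], bypass: &[f32]) -> Vec<f32> {
            // Regular layer output: sigmoid(w . inputs + b) for every output node.
            let mut output: Vec<f32> = self.weights.iter()
                .zip(self.biases.iter())
                .map(|(ws, b)| {
                    let sum: f32 = ws.iter().zip(inputs).map(|(w, x)| w * x).sum::<f32>() + b;
                    1.0 / (1.0 + (-sum).exp())
                })
                .collect();
            // Concatenate the bypass right before returning, as described above.
            output.extend_from_slice(bypass);
            output
        }
    }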