nsga

Overview

nsga is an opinionated implementation of the NSGA-II (Non-dominated Sorting Genetic Algorithm), a multi-objective genetic optimization algorithm.

The focus of this implementation is practical applicability, rather than just optimizing pure mathematical functions.

Example

Let's define an example problem:

Given an array of integers and a value, find indices of the array elements
that sum up to the given value.

For example, given an array:

let a = vec![1, 5, 8, 0, 6, 4];

and a value of 19, the solution could be:

vec![1, 0, 1, 0, 1, 1]; // 1 + 8 + 6 + 4 = 19

or

vec![0, 1, 1, 0, 1, 0]; // 5 + 8 + 6 = 19

Of course, for such a small input we wouldn't need a fancy optimizer, but when dealing with thousands or millions of elements, the task becomes somewhat challenging.
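
For intuition, here is what a naive exhaustive search over all 2^n index masks could look like (a standalone sketch that does not use the crate; the function name is just for illustration). It works fine for six elements, but the loop count doubles with every additional element, which is exactly why an optimizer is needed for large inputs:

fn brute_force(items: &[i64], target: i64) -> Option<Vec<isize>> {
  let n = items.len();

  // Try every possible 0/1 mask; this is O(2^n) and only viable for tiny inputs.
  for mask in 0u64..(1u64 << n) {
    let sum: i64 = (0..n)
      .filter(|&i| mask & (1u64 << i) != 0)
      .map(|i| items[i])
      .sum();

    if sum == target {
      return Some((0..n).map(|i| ((mask >> i) & 1) as isize).collect());
    }
  }

  None
}

fn main() {
  let a = vec![1, 5, 8, 0, 6, 4];
  let solution = brute_force(&a, 19).expect("a solution exists");
  println!("{:?}", solution); // prints [0, 1, 1, 0, 1, 0], i.e. 5 + 8 + 6
}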

Solution candidate

The problem is represented by an implementation of the Solution trait. Let's define a base structure for our solution candidate:

#[derive(Clone, Debug)]
struct Candidate {
  indices: Vec<isize>,
}

Since a solution is a selection of array elements, we can represent it as a Vec of 0/1 flags: a 1 at position i means element i is included in the sum.
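
For instance, the first solution from the example above would be encoded like this:

// 1 + 8 + 6 + 4 = 19: a 1 at position i marks a[i] as included in the sum.
let s1 = Candidate {
  indices: vec![1, 0, 1, 0, 1, 1],
};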

Mutation

Mutation is an operation that changes a solution candidate. It is a way for the system to add diversity and escape local optima, not unlike the role mutation plays in the evolution of real biological systems.

Mutation is heavily problem-dependent; in our case we're simply going to flip each element from 0 to 1 (and vice versa) with a certain probability.

fn mutate(&mut self) {
  let mut rng = thread_rng();

  for i in &mut self.indices {
    if rng.gen_ratio(MUTATION_ODDS.0, MUTATION_ODDS.1) {
      *i = if *i == 0 { 1 } else { 0 }
    }
  }
}
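
The snippet above assumes a MUTATION_ODDS constant and the rand crate, which aren't shown here. A minimal sketch of what they could look like, assuming roughly a 1-in-5 flip chance per element:

use rand::{thread_rng, Rng};

// Hypothetical odds: flip each element with probability 1/5.
// rng.gen_ratio(numerator, denominator) returns true with the given probability.
const MUTATION_ODDS: (u32, u32) = (1, 5);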

Crossover

Crossover is an operation that takes two parent candidates and mixes their "genes". In our implementation, we're going to split both parents in half and swap the corresponding halves. That is, given two parents:

let a = vec![1, 1, 1, 1, 1, 1];
let b = vec![0, 0, 0, 0, 0, 0];

after the crossover these would look like:

vec![1, 1, 1, 0, 0, 0];
vec![0, 0, 0, 1, 1, 1];

fn crossover(&mut self, other: &mut Self) {
  let mut a = &mut self.indices;
  let mut b = &mut other.indices;

  // Use `a` for the longer vector:
  if b.len() > a.len() {
    a = &mut other.indices;
    b = &mut self.indices;
  }

  let a_mid = a.len() / 2;
  let b_mid = b.len() / 2;

  let b_back = &mut b[b_mid..];
  let a_back = &mut a[a_mid..][..b_back.len()];

  a_back.swap_with_slice(b_back);

  let a_len = a_mid + a_back.len();
  b.extend(a.drain(a_len..));
}
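
To see the half-swap on its own, here is a standalone sketch of the same slice logic as a free function over two vectors (the names are illustrative, not part of the crate), applied to the example parents:

fn half_swap(a: &mut Vec<isize>, b: &mut Vec<isize>) {
  // Assumes `a` is the longer (or equal-length) vector, as the method above
  // arranges before swapping.
  let a_mid = a.len() / 2;
  let b_mid = b.len() / 2;

  let b_back = &mut b[b_mid..];
  let a_back = &mut a[a_mid..][..b_back.len()];

  a_back.swap_with_slice(b_back);

  // Move any leftover tail of the longer vector onto the shorter one.
  let a_len = a_mid + a_back.len();
  b.extend(a.drain(a_len..));
}

fn main() {
  let mut a = vec![1, 1, 1, 1, 1, 1];
  let mut b = vec![0, 0, 0, 0, 0, 0];

  half_swap(&mut a, &mut b);

  assert_eq!(a, vec![1, 1, 1, 0, 0, 0]);
  assert_eq!(b, vec![0, 0, 0, 1, 1, 1]);
}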

Objective

In order to guide the optimizer, we need to implement the Objective trait. The only mandatory method is Objective::value which takes a solution candidate and returns its fitness value. The lower this value the closer a particular solution is to the ideal solution.

For our task we'd implement something like the following:

fn value(&self, candidate: &Candidate) -> f64 {
  let res: f64 = candidate
    .indices
    .iter()
    .enumerate()
    .map(|(i, rec)| if *rec == 1 { self.items[i] } else { 0. })
    .sum();

  // Fitness is the absolute distance from the target sum; 0 means a perfect match.
  (self.goal - res).abs()
}

Basically, it computes the sum of all the elements whose bit is set in the solution, and then returns the absolute difference between that sum and the target value we're looking for.

The closer the sum of the current solution is to the target value, the smaller the difference will be, which is exactly what we need, since the optimizer always tries to minimize the objective.
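
As a quick sanity check, here is the same fitness computation done by hand as a standalone function (a sketch independent of the crate):

fn fitness(items: &[f64], indices: &[isize], goal: f64) -> f64 {
  let sum: f64 = indices
    .iter()
    .enumerate()
    .map(|(i, rec)| if *rec == 1 { items[i] } else { 0. })
    .sum();

  (goal - sum).abs()
}

fn main() {
  let items = [1., 5., 8., 0., 6., 4.];

  // Both example solutions hit the target exactly, so their fitness is 0.
  assert_eq!(fitness(&items, &[1, 0, 1, 0, 1, 1], 19.), 0.);
  assert_eq!(fitness(&items, &[0, 1, 1, 0, 1, 0], 19.), 0.);

  // A candidate that selects only the first two elements sums to 6,
  // so its fitness is |19 - 6| = 13.
  assert_eq!(fitness(&items, &[1, 1, 0, 0, 0, 0], 19.), 13.);
}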

Early termination

When the target value of an objective is known, the optimization process can be made significantly faster by terminating early instead of running all the iterations.

For example, in our case we know exactly the value we're looking for, so we can terminate the search the moment we're close enough to it.

fn good_enough(&self, val: f64) -> bool {
   val <= self.toleration
}

By tweaking the self.toleration value we can make the search as precise as we need.
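
For example (a usage sketch, assuming SumObjective is built with the same fields used in the objectives() method below), a toleration of 0.5 accepts any candidate whose sum lands within 0.5 of the goal:

let objective = SumObjective {
  goal: 19.,
  items: vec![1., 5., 8., 0., 6., 4.],
  toleration: 0.5,
};

// A candidate summing to 18.7 has a fitness of 0.3, which is within toleration,
// so the optimizer may stop early; a fitness of 0.7 is not yet good enough.
assert!(objective.good_enough(0.3));
assert!(!objective.good_enough(0.7));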

Metadata

There's a set of additional meta-parameters we'd need to provide to the optimizer. We do this by implementing a Meta trait.

Meta::population_size

Population size is the size of the internal pool of candidates the optimizer uses. The default value is 20 and in most cases it can be left untouched.

Meta::crossover_odds

This method should return a probability of applying a crossover operation. It should generally be relatively high, around 50% or so.

Meta::mutation_odds

This method should return a probability of applying a mutation operation. It should generally be smaller than the crossover value, around 20-30% or so.

Meta::random_solution

A method to return a random solution candidate. In our case, we'll just return a vector of zeroes for the indices.

fn random_solution(&mut self) -> Candidate {
  let indices: Vec<isize> = (0..self.records_length).map(|_| 0).collect();

  Candidate { indices }
}
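
Assuming records_length is a plain usize count (it isn't shown elsewhere in this example), the same thing can be written a bit more compactly with the vec! macro:

fn random_solution(&mut self) -> Candidate {
  Candidate {
    // Start with nothing selected; mutation and crossover introduce variety.
    indices: vec![0; self.records_length],
  }
}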

Meta::objectives

This method returns a vector of objectives to use in the optimization. In our case, it will hold a single instance of our SumObjective:

fn objectives(&self) -> &Vec<Box<dyn Objective>> {
  vec![
    Box::new(SumObjective {
      goal: 19.,
      items: vec![1., 5., 8., 0., 6., 4.],
      toleration: 0.0,
    }),
  ]
}

Meta::constraints

This method returns an optional vector of constraints to use in the optimization. We won't need constraints for our little example.

Multiple objectives

Now, being able to optimize for one objective is great, but NSGA-II is a multi-objective optimization algorithm, meaning it can optimize for many objectives at the same time, and some of those may even conflict with each other! The details are outside the scope of this tutorial; feel free to read more about it on Wikipedia if you'd like.

Remember, with our initial test vector:

let a = vec![1, 5, 8, 0, 6, 4];

we identified two solutions with a sum of 19:

let s1 = vec![1, 0, 1, 0, 1, 1]; // 1 + 8 + 6 + 4 = 19
let s2 = vec![0, 1, 1, 0, 1, 0]; // 5 + 8 + 6 = 19

Now, let's say that in addition to finding the required sum, we also want the solution with the smallest number of summands. For s1 above there are four summands: 1, 8, 6 and 4, while s2 has only three: 5, 8 and 6, so we'd want the optimizer to find the latter.

All we need for this is to implement another objective. Let's call it OnesObjective, because it simply returns the number of ones (set bits) in the solution:

pub struct OnesObjective {}

impl Objective for OnesObjective {
  fn value(&self, candidate: &Candidate) -> f64 {
    candidate.indices.iter().filter(|i| **i == 1).count() as f64
  }
}

And then add it to our objectives method:

fn objectives(&self) -> &Vec<Box<dyn Objective>> {
  vec![
    Box::new(SumObjective {
      goal: 19.,
      items: vec![1., 5., 8., 0., 6., 4.],
      toleration: 0.0,
    }),
    Box::new(OnesObjective {}),
  ]
}

That's it!

For complete code examples, take a look at the crate tests.
