Little Raft

The lightest distributed consensus library. Run your own replicated state machine! ❤️

Installing

Simply import the crate. In your Cargo.toml, add

[dependencies]
little_raft = "0.1"

Using

To start running Little Raft, you only need to do three things.

  1. Implement the StateMachine that you want your cluster to maintain. Little Raft will take care of replicating this machine across the cluster and achieving consensus on its state.
/// StateMachine describes a user-defined state machine that is replicated
/// across the cluster. Raft can replicate whatever distributed state machine
/// implements this trait.
pub trait StateMachine<T>
where
    T: StateMachineTransition,
{
    /// This is a hook that the local Replica will call each time the state of a
    /// particular transition changes. It is up to the user what to do with that
    /// information.
    fn register_transition_state(&mut self, transition_id: T::TransitionID, state: TransitionState);

    /// When a particular transition is ready to be applied, the Replica will
    /// call apply_transition to apply said transition to the local state
    /// machine.
    fn apply_transition(&mut self, transition: T);

    /// This function is used to receive transitions from the user that need to
    /// be applied to the replicated state machine. Note that while all Replicas
    /// poll get_pending_transitions periodically, only the Leader Replica
    /// actually processes them. All other Replicas discard pending transitions.
    /// get_pending_transitions must not return the same transition twice.
    fn get_pending_transitions(&mut self) -> Vec<T>;
}
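For example, a replicated single-value register might implement the trait roughly as follows. This is a minimal sketch: the module paths, trait bounds, and the SetValue / Register types are illustrative assumptions, so check the docs for your crate version before copying it.

use little_raft::state_machine::{StateMachine, StateMachineTransition, TransitionState};
use std::collections::{HashMap, VecDeque};

// Hypothetical transition: set the register to a value. A `None` value doubles
// as the no-op transition that Replica::new asks for in step 3.
#[derive(Clone, Copy, Debug)]
struct SetValue {
    id: usize,
    value: Option<i32>,
}

impl StateMachineTransition for SetValue {
    type TransitionID = usize;

    fn get_id(&self) -> Self::TransitionID {
        self.id
    }
}

struct Register {
    value: i32,
    pending: VecDeque<SetValue>,
    states: HashMap<usize, TransitionState>,
}

impl StateMachine<SetValue> for Register {
    fn register_transition_state(&mut self, transition_id: usize, state: TransitionState) {
        // Remember the latest state of each transition as it moves through
        // the Queued -> Committed -> Applied pipeline.
        self.states.insert(transition_id, state);
    }

    fn apply_transition(&mut self, transition: SetValue) {
        if let Some(v) = transition.value {
            self.value = v;
        }
    }

    fn get_pending_transitions(&mut self) -> Vec<SetValue> {
        // Drain the queue so the same transition is never returned twice.
        self.pending.drain(..).collect()
    }
}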
   
  
  2. Implement the Cluster abstraction so that the local Replica can communicate with other nodes.
/// Cluster is used for the local Raft Replica to communicate with the rest of
/// the Raft cluster. It is up to the user how to abstract that communication.
/// The Cluster trait also contains hooks which the Replica will use to inform
/// the crate user of state changes.
pub trait Cluster<T>
where
    T: StateMachineTransition,
{
    /// This function is used to deliver messages to target Replicas. The
    /// Replica will provide the to_id of the other Replica it's trying to send
    /// its message to and provide the message itself. The send_message
    /// implementation must not block but is allowed to silently fail -- Raft
    /// exists to achieve consensus in spite of failures, after all.
    fn send_message(&mut self, to_id: usize, message: Message<T>);

    /// This function is used by the Replica to receive pending messages from
    /// the cluster. The receive_messages implementation must not block and must
    /// not return the same message more than once.
    fn receive_messages(&mut self) -> Vec<Message<T>>;

    /// By returning true from halt you can signal to the Replica that it should
    /// stop running.
    fn halt(&self) -> bool;

    /// This function is a hook that the Replica uses to inform the user of the
    /// Leader change. The leader_id is an Option<ReplicaID> because the Leader
    /// might be unknown for a period of time. Remember that only Leaders can
    /// process transitions submitted by the Raft users, so the leader_id can be
    /// used to redirect the requests from non-Leader nodes to the Leader node.
    fn register_leader(&mut self, leader_id: Option<ReplicaID>);
}
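For local testing, Cluster can be as simple as a set of in-process channels. The sketch below reuses the hypothetical SetValue transition from step 1 and assumes ReplicaID is the plain usize shown in the signatures above; the module paths are likewise assumptions, and a real deployment would put its networking layer behind this trait instead.

use little_raft::cluster::Cluster;
use little_raft::message::Message;
use std::collections::HashMap;
use std::sync::mpsc::{Receiver, Sender};

struct ChannelCluster {
    // One sender per peer Replica, plus this Replica's own inbox.
    peers: HashMap<usize, Sender<Message<SetValue>>>,
    inbox: Receiver<Message<SetValue>>,
    leader_id: Option<usize>,
    halted: bool,
}

impl Cluster<SetValue> for ChannelCluster {
    fn send_message(&mut self, to_id: usize, message: Message<SetValue>) {
        // Must not block; if the peer is gone, fail silently as the trait allows.
        if let Some(tx) = self.peers.get(&to_id) {
            let _ = tx.send(message);
        }
    }

    fn receive_messages(&mut self) -> Vec<Message<SetValue>> {
        // try_iter drains everything currently queued without blocking and
        // never yields the same message twice.
        self.inbox.try_iter().collect()
    }

    fn halt(&self) -> bool {
        self.halted
    }

    fn register_leader(&mut self, leader_id: Option<usize>) {
        // Remember the Leader so client requests can be redirected to it.
        self.leader_id = leader_id;
    }
}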
      
     
    
   
  
  3. Start your replica!
    /// Create a new Replica.
    ///
    /// id is the ID of this Replica within the cluster.
    ///
    /// peer_ids is a vector of IDs of all other Replicas in the cluster.
    ///
    /// cluster represents the abstraction the Replica uses to talk with other
    /// Replicas.
    ///
    /// state_machine is the state machine that Raft maintains.
    ///
    /// noop_transition is a transition that can be applied to the state machine
    /// multiple times with no effect.
    ///
    /// heartbeat_timeout defines how often the Leader Replica sends out
    /// heartbeat messages.
    ///
    /// election_timeout_range defines the election timeout interval. If the
    /// Replica gets no messages from the Leader before the timeout, it
    /// initiates an election.
    ///
    /// In practice, pick election_timeout_range to be 2-3x the value of
    /// heartbeat_timeout, depending on your particular use-case network latency
    /// and responsiveness needs. An election_timeout_range / heartbeat_timeout
    /// ratio that's too low might cause unwarranted re-elections in the
    /// cluster.
    pub fn new(
        id: ReplicaID,
        peer_ids: Vec<ReplicaID>,
        cluster: Arc<Mutex<C>>,
        state_machine: Arc<Mutex<M>>,
        noop_transition: T,
        heartbeat_timeout: Duration,
        election_timeout_range: (Duration, Duration),
    ) -> Replica<C, M, T>;

    /// This function starts the Replica and blocks forever.
    ///
    /// recv_msg is a channel on which the user must notify the Replica whenever
    /// new messages from the Cluster are available. The Replica will not poll
    /// for messages from the Cluster unless notified through recv_msg.
    ///
    /// recv_transition is a channel on which the user must notify the Replica
    /// whenever new transitions to be processed for the StateMachine are
    /// available. The Replica will not poll for pending transitions for the
    /// StateMachine unless notified through recv_transition.
    pub fn start(&mut self, recv_msg: Receiver<()>, recv_transition: Receiver<()>);
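Putting the pieces together might look roughly like the following. It is a hedged sketch built on the Register and ChannelCluster examples above: the timeout values are arbitrary, and it assumes the Receiver<()> arguments come from crossbeam-channel, so double-check the types against the docs for your version.

use crossbeam_channel::unbounded;
use little_raft::replica::Replica;
use std::sync::{Arc, Mutex};
use std::time::Duration;

fn run_replica(id: usize, peer_ids: Vec<usize>, cluster: ChannelCluster, machine: Register) {
    // In real code, keep clones of these Arcs around so the application can
    // enqueue transitions and deliver incoming messages.
    let cluster = Arc::new(Mutex::new(cluster));
    let state_machine = Arc::new(Mutex::new(machine));

    // A transition that leaves the state machine unchanged however many times
    // it is applied; in the Register sketch, value: None is that no-op.
    let noop = SetValue { id: 0, value: None };

    let mut replica = Replica::new(
        id,
        peer_ids,
        cluster,
        state_machine,
        noop,
        Duration::from_millis(100),                               // heartbeat_timeout
        (Duration::from_millis(200), Duration::from_millis(300)), // election_timeout_range
    );

    // Notify the Replica through these channels whenever new messages or
    // transitions arrive. start() blocks forever, so give it its own thread
    // in real code.
    let (_msg_notifier, msg_rx) = unbounded();
    let (_transition_notifier, transition_rx) = unbounded();
    replica.start(msg_rx, transition_rx);
}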
       
     
    
   
  

With that, you're good to go. We are working on examples and more tests; for now, take a look at the little_raft/tests directory and at the documentation at https://docs.rs/little_raft/0.1.3/little_raft/.

Testing

Run cargo test.

Contributing

Contributions are very welcome! Do remember that one of the goals of this library is to be as small and simple as possible. Let's keep the code in little_raft/src under 1,000 lines. PRs breaking this rule will be declined.

> cloc little_raft/src
       6 text files.
       6 unique files.                              
       0 files ignored.

github.com/AlDanial/cloc v 1.90  T=0.02 s (369.2 files/s, 56185.0 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Rust                             6             82            199            632
-------------------------------------------------------------------------------
SUM:                             6             82            199            632
-------------------------------------------------------------------------------

You are welcome to pick up and work on any of the issues open for this project. Or you can submit new issues if anything comes up from your experience using this library.

Comments
  • Making Little Raft able to handle out-of-order, duplicate, and lost messages

    Addresses an issue where, when next_index for a peer is discovered to be 0, the leader's heartbeat-sending call panics when trying to set prev_log_index to -1. Fixed by setting prev_log_index to 0 when next_index is 0.

    opened by andreev-io 8
  • Snapshotting and log compaction

    This PR implements section 7 of the Raft paper, i.e. Log Compaction. Now, Replicas will merge logs into snapshots to avoid accumulating too many logs that hoard memory. Replicas now support InstallSnapshot RPCs, meaning that a Leader can detect when a Follower is so far behind that the logs it's missing have already been compacted, and then transmit the snapshot to the follower instead of transmitting a particular log entry. Non-leader nodes now know how to treat and respond to InstallSnapshot RPCs.

    The StateMachine trait has been extended to provide the Little Raft user with the ability to save and load state to and from permanent storage. The raft_unstable test has been updated to randomly drop and shuffle messages, causing Replicas to retransmit snapshots and asserting the behavior of the new InstallSnapshot RPC.

    opened by andreev-io 7
  • Extend TransitionAbandonedReason enum

    Currently the enum is

    pub enum TransitionAbandonedReason {
        // NotLeader transitions have been abandoned because the replica is not
        // the cluster leader.
        NotLeader,
    }
    

    However, transitions can also be dropped by a replica that gets disconnected from the majority of the cluster while still believing it is the leader and accepting transitions; upon reconnecting to the rest of the cluster, such a replica will drop its uncommitted transitions because they conflict with the consensus reached by the cluster majority.

    enhancement good first issue 
    opened by andreev-io 6
  • Add Abandoned type to TransitionState enum

    Raft is an asynchronous protocol, so one of the challenges when using it is getting feedback on whether a particular transition has or hasn't been applied. To solve this problem, Little Raft calls the user-defined register_transition_state hook every time any transition changes its state. This way the library user can keep track of transitions as they move through the Queued -> Committed -> Applied pipeline.

    However, a transition can be ignored by the replica it was submitted to. The most likely reason is that the replica is not the cluster leader. Another possible reason is that the replica used to be the leader, got disconnected from the rest of the cluster, kept accepting transitions for a while, and then had to drop them when reconnecting to a cluster that had elected a newer leader in the meantime (the stale leader dropping uncommitted transitions is the desired behavior in this case).

    To let the user know when a transition got dropped, we should add an Abandoned state to the TransitionState enum. The Abandoned state could also wrap another type so that we can signal the particular reason a transition was abandoned -- NotLeader vs UncommittedUnsynced. Naming could be improved.
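    A rough sketch of what the extended enums could look like (names and variants here are illustrative, not final):

    pub enum TransitionState {
        Queued,
        Committed,
        Applied,
        // Proposed: the transition was dropped, with the reason attached.
        Abandoned(TransitionAbandonedReason),
    }

    pub enum TransitionAbandonedReason {
        // The replica the transition was submitted to is not the cluster leader.
        NotLeader,
        // Proposed: a stale leader dropped uncommitted transitions after
        // rejoining a cluster that had moved on under a newer leader.
        UncommittedUnsynced,
    }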

    enhancement good first issue 
    opened by andreev-io 4
  • Don't require Copy for StateMachineTransition

    Heap-allocated types like String and Vec are not copyable, so don't require the Copy trait for StateMachineTransition; this simplifies implementing state machine transitions.

    opened by penberg 2
  • Deadlock when acquiring cluster lock while appending entry as follower

    I am seeing a deadlock where process_append_entry_request_as_follower() is unable to grab cluster.lock() when attempting to send an append entry response.

    I am not sure if the scenario matters, but I am seeing this during initial leader election: two nodes first become candidates, one of them wins, but the other -- after becoming a follower -- never manages to register the winner as leader and respond to its heartbeat because it waits indefinitely for cluster.lock() to be released.

    I don't understand why but reducing the scope of state_machine.lock() seems to cure the issue:

    diff --git a/little_raft/src/replica.rs b/little_raft/src/replica.rs
    index c13219e..5168743 100644
    --- a/little_raft/src/replica.rs
    +++ b/little_raft/src/replica.rs
    @@ -705,23 +705,28 @@ where
                 return;
             }
    
    -        let mut state_machine = self.state_machine.lock().unwrap();
    -        for entry in entries {
    -            // Drop local inconsistent logs.
    -            if entry.index <= self.get_last_log_index()
    -            && entry.term != self.get_term_at_index(entry.index).unwrap() {
    -                for i in entry.index..self.log.len() {
    -                    state_machine.register_transition_state(
    -                        self.log[i].transition.get_id(),
    -                        TransitionState::Abandoned(TransitionAbandonedReason::ConflictWithLeader)
    -                    );
    -                }
    -                self.log.truncate(entry.index);
    -            }
    +        {
    +            let mut state_machine = self.state_machine.lock().unwrap();
    +            for entry in entries {
    +                // Drop local inconsistent logs.
    +                if entry.index <= self.get_last_log_index()
    +                    && entry.term != self.get_term_at_index(entry.index).unwrap()
    +                {
    +                    for i in entry.index..self.log.len() {
    +                        state_machine.register_transition_state(
    +                            self.log[i].transition.get_id(),
    +                            TransitionState::Abandoned(
    +                                TransitionAbandonedReason::ConflictWithLeader,
    +                            ),
    +                        );
    +                    }
    +                    self.log.truncate(entry.index);
    +                }
    
    -            // Push received logs.
    -            if entry.index == self.log.len() + self.index_offset {
    -                self.log.push(entry);
    +                // Push received logs.
    +                if entry.index == self.log.len() + self.index_offset {
    +                    self.log.push(entry);
    +                }
                 }
             }
    
    bug 
    opened by penberg 1
  • Implement log compaction

    Section 7 of the Raft paper describes a much-needed optimization: log compaction. With it, the cluster will be able to snapshot its state from time to time so that replicas in the future don't have to replicate the entire transition log and so that storage space is preserved.

    Implementing this in Little Raft according to the paper is a good way to make Little Raft a production-grade library. This is a good feasible challenge for people that want to build important functionality of a real distributed system following a clear specification (the Raft paper section 7, in this case).

    enhancement 
    opened by andreev-io 1
  • Fix lock re-entrancy in process_append_entry_request_as_follower()

    In my project, I have a struct that implements both the Cluster and StateMachine traits, so the cluster and state machine passed to Replica share the same Mutex.

    However, process_append_entry_request_as_follower is holding on to the state machine lock while attempting to acquire the cluster lock, which results in re-entrancy that causes a deadlock (as reported by no_deadlocks crate):

    A reentrance has been attempted, but `std::sync`'s locks are not reentrant. This results in a deadlock. dependence cycle: [Thread(ThreadId(11)), Lock(20)]
    Lock taken at:
       0: little_raft::replica::Replica<C,M,T,D>::process_append_entry_request_as_follower
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:709:33
       1: little_raft::replica::Replica<C,M,T,D>::process_message_as_follower
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:766:18
       2: little_raft::replica::Replica<C,M,T,D>::process_message
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:352:32
       3: little_raft::replica::Replica<C,M,T,D>::poll_as_follower
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:332:21
       4: little_raft::replica::Replica<C,M,T,D>::start
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:233:36
    
    Reentrace at:
       0: little_raft::replica::Replica<C,M,T,D>::process_append_entry_request_as_follower
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:735:27
       1: little_raft::replica::Replica<C,M,T,D>::process_message_as_follower
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:766:18
       2: little_raft::replica::Replica<C,M,T,D>::process_message
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:352:32
       3: little_raft::replica::Replica<C,M,T,D>::poll_as_follower
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:332:21
       4: little_raft::replica::Replica<C,M,T,D>::start
                 at /Users/penberg/src/database/little-raft/little_raft/src/replica.rs:233:36
    

    Fix the deadlock by moving entry processing to a separate function, which reduces the scope of the state machine lock so that it is no longer held when we attempt to acquire the cluster lock.

    Fixes #42.

    opened by penberg 0
  • Broadcast new transitions immediately, not waiting for heartbeat timer

    Raft Leader doesn't have to wait for the heartbeat timer to broadcast a pending transition. Let's broadcast as soon as transitions are queued up.

    Closes #24

    opened by andreev-io 0
  • Addressing mismatch_index being None

    The only way a panic on line 419 can occur is if all of the following are true:

    • the leader gets a response with success: false
    • the response was sent due to the leader's term being behind the follower's term
    • the leader itself is no longer behind the follower's term

    If the message takes too long to deliver (for example while the same leader gets re-elected), the leader theoretically might have enough time to advance past the term specified in the follower's response. In this case Little Raft panics. Let's avoid the panic by checking that mismatch_index is set to Some(...), and if it isn't, ignoring the follower's response.

    Closes #22

    opened by andreev-io 0
  • Panic is observed when mismatch_index is None

    thread 'tokio-runtime-worker' panicked at 'called Option::unwrap() on a None value', /Users/penberg/.cargo/registry/src/github.com-1ecc6299db9ec823/little_raft-0.1.4/src/replica.rs:416:57 note: run with RUST_BACKTRACE=1 environment variable to display a backtrace Error: panic

    opened by andreev-io 0
  • Make StateMachine snapshotting methods optional by moving them to separate trait with default implementation

    Snapshotting in Raft is optional. However, Little Raft enforces that all users of the StateMachine trait implement get_snapshot, create_snapshot, and set_snapshot; the implementation can be a no-op if the user doesn't want to use snapshotting, but the code doesn't make that clear.

    We should figure out how to move snapshot-related methods into a separate trait that has a default no-op implementation that Little Raft users can use if they choose to avoid snapshotting.

    opened by andreev-io 0
  • Add support for cluster membership changes

    This is section 6 of the Raft paper. We need to stop assuming that the cluster configuration is fixed and add support for nodes joining / leaving the cluster. Implementing this implies adding support for joint consensus.

    enhancement 
    opened by andreev-io 1
  • Simulate network delay in tests

    Pull request #18 fixes an issue with out-of-order or stray AppendEntries rejects, which is only triggerable when Little Raft is wired up with networked servers.

    Let's improve the tests to simulate network delay and partitioning to attempt to catch bugs like these.

    enhancement help wanted 
    opened by penberg 3
  • Create a Storage trait

    What happens when a new replica is added to the cluster? In the current design, other replicas won't learn about it unless they are restarted with the new peer_ids. But upon restart, the replicas and their state machines will lose their state unless the user builds some persistence themselves.

    To make this simpler for the user, we should add functionality for the replica to preserve some permanent state. This should be exposed to the Little Raft user via a trait that they can implement as they wish. The Raft paper has a lot of good info on what state needs to be preserved and how, so a good first step would be implementing that, and then adding functionality to snapshot the state of the state machine.
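    One hypothetical shape for such a trait, purely for illustration (nothing like this exists in the crate yet, and the real design would need to cover the term, vote, and log entries described in the paper):

    pub trait Storage {
        // Persist the durable Raft state before acknowledging anything that
        // depends on it.
        fn store(&mut self, state: &[u8]);

        // Load whatever was persisted previously, if anything, on restart.
        fn restore(&self) -> Option<Vec<u8>>;
    }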

    enhancement 
    opened by andreev-io 0
  • Add extensive tests with node failures, packet loss, and leader reelection

    Tests in this project serve two goals: testing functionality and offering usage examples. There's a lot of meaningful work that can be done by adding good tests to Little Raft.

    Specifically, you'll want to test Little Raft functionality when replicas go down and up, when messages get lost on the network, and when leaders get re-elected. You might find it challenging to simulate nodes going up or down or getting connected and disconnected, but remember that in the tests the developer has control over how messages pass between nodes -- emulating a disconnected node is as simple as not delivering any messages to it, and simulating packet loss is as simple as dropping some messages randomly.

    You could also add integration tests where replicas are actual separate processes communicating over the network. The possibilities are endless.

    documentation good first issue 
    opened by andreev-io 0
Releases
  • v0.2.0(Jan 8, 2022)

    What's Changed

    • Removing a duplicate paragraph in comments and in the readme by @andreev-io in https://github.com/andreev-io/little-raft/pull/26
    • Create rust.yml by @andreev-io in https://github.com/andreev-io/little-raft/pull/29
    • Create greetings.yml by @andreev-io in https://github.com/andreev-io/little-raft/pull/30
    • Adding clippy linter by @andreev-io in https://github.com/andreev-io/little-raft/pull/31
    • refactor: reuse timeout channel creation by @suzaku in https://github.com/andreev-io/little-raft/pull/27
    • fix #13, Add Abandoned type to TransitionState enum by @suzaku in https://github.com/andreev-io/little-raft/pull/33
    • Use min by @suzaku in https://github.com/andreev-io/little-raft/pull/37
    • fix #34, record abandoned reason for conflicting entry logs by @suzaku in https://github.com/andreev-io/little-raft/pull/36
    • Snapshotting and log compaction by @andreev-io in https://github.com/andreev-io/little-raft/pull/32
    • v0.2.0 by @andreev-io in https://github.com/andreev-io/little-raft/pull/40

    New Contributors

    • @suzaku made their first contribution in https://github.com/andreev-io/little-raft/pull/27

    Full Changelog: https://github.com/andreev-io/little-raft/compare/v0.1.6...v0.2.0

  • v0.1.6(Dec 1, 2021)

    What's Changed

    • Addressing mismatch_index being None by @andreev-io in https://github.com/andreev-io/little-raft/pull/23
    • Broadcast new transitions immediately, not waiting for heartbeat timer by @andreev-io in https://github.com/andreev-io/little-raft/pull/25