An implementation of the paper "The Honey Badger of BFT Protocols" in Rust. This is a modular consensus library.

Overview

Honey Badger Byzantine Fault Tolerant (BFT) consensus algorithm

Welcome to a Rust library of the Honey Badger Byzantine Fault Tolerant (BFT) consensus algorithm. The research and protocols for this algorithm are explained in detail in "The Honey Badger of BFT Protocols" by Miller et al., 2016.

An official security audit has been completed on hbbft by Jean-Philippe Aumasson.

Following is an overview of HoneyBadger BFT and basic instructions for getting started.

Note: This library is a work in progress and parts of the algorithm are still in development.

What is Honey Badger?

The Honey Badger consensus algorithm allows nodes in a distributed, potentially asynchronous environment to achieve agreement on transactions. The agreement process does not require a leader node, tolerates corrupted nodes, and makes progress in adverse network conditions. Example use cases are decentralized databases and blockchains.

Honey Badger is Byzantine Fault Tolerant. The protocol can reach consensus with up to f faulty nodes (even if they are completely controlled by an attacker), as long as the total number N of nodes is greater than 3f.
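
For example, the bound N > 3f means that a network of four nodes tolerates one faulty node, and the simulation default of ten nodes tolerates three. A quick sketch of the arithmetic:

    /// Maximum number of faulty nodes `f` that a network of `n` nodes
    /// tolerates under the bound `n > 3 * f`, i.e. `f = (n - 1) / 3`.
    fn max_faulty(n: usize) -> usize {
        n.saturating_sub(1) / 3
    }

    fn main() {
        assert_eq!(max_faulty(4), 1); // smallest network tolerating a fault
        assert_eq!(max_faulty(10), 3); // the simulation default of 10 nodes
        for &n in &[4, 7, 10, 13] {
            println!("N = {:2} tolerates f = {}", n, max_faulty(n));
        }
    }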

Honey Badger is asynchronous. It does not make timing assumptions about message delivery. An adversary can control network scheduling and delay messages without impacting consensus.

How does it work?

Honey Badger is a modular library composed of several independent algorithms. To reach consensus, Honey Badger proceeds in epochs. In each epoch, participating nodes broadcast a set of encrypted data transactions to one another and agree on the contents of those transactions.

In an optimal networking environment, the output includes data sent from each node. In an adverse environment, the output is an agreed-upon subset of that data. Either way, the resulting output contains a batch of transactions that is guaranteed to be consistent across all nodes.
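
To make the epoch structure concrete, here is a toy sketch (illustrative types only, not this crate's API) of the result every correct node obtains for an epoch: one batch mapping each proposer to the contribution the network accepted from it.

    use std::collections::BTreeMap;

    type NodeId = usize;
    type Contribution = Vec<u8>; // whatever bytes a node proposes

    // One agreed batch: every correct node finishes the epoch with an
    // identical `contributions` map, even under an adverse network
    // (where it may contain only an agreed subset of the proposals).
    struct Batch {
        epoch: u64,
        contributions: BTreeMap<NodeId, Contribution>,
    }

    fn main() {
        let proposals: BTreeMap<NodeId, Contribution> = (0..4)
            .map(|id| (id, format!("txs from node {}", id).into_bytes()))
            .collect();
        // Toy fault-free round: all proposals are accepted.
        let batch = Batch { epoch: 0, contributions: proposals };
        println!("epoch {}: {} contributions", batch.epoch, batch.contributions.len());
    }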

In addition to validators, the algorithms support observers: These don't actively participate, and don't need to be trusted, but they receive the output as well, and are able to verify it under the assumption that more than two thirds of the validators are correct.

Algorithms

  • Honey Badger: Each node inputs transactions. The protocol outputs a sequence of batches of transactions.

  • Dynamic Honey Badger: A modified Honey Badger where nodes can dynamically add and remove other nodes to/from the network.

  • Queueing Honey Badger: Works exactly like Dynamic Honey Badger, but includes a built in transaction queue.

  • Subset: Each node inputs data. The nodes agree on a subset of suggested data.

  • Broadcast: A proposer node inputs data, and every node receives it as output.

  • Binary Agreement: Each node inputs a binary value. The nodes agree on a value that was input by at least one correct node.

  • Threshold Sign: Each node inputs the same data to be signed, and outputs the unique valid signature matching the public master key. The signature is used as a pseudorandom value in the Binary Agreement protocol.

  • Threshold Decryption: Each node inputs the same ciphertext, encrypted to the public master key, and outputs the decrypted data.

  • Synchronous Key Generation: A dealerless algorithm that generates keys for threshold encryption and signing. Unlike the other algorithms, this one is completely synchronous and should run on top of Honey Badger (or another consensus algorithm).
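
All of these algorithms share the same message-driven shape: a node submits its input, handles messages arriving from peers, and eventually a step yields output. The mock below illustrates only that shape; the crate's actual traits, names, and signatures differ, so consult its documentation.

    // Schematic only: illustrative stand-ins, not the crate's real types.
    struct Message; // a wire message addressed to one or all peers

    struct Step<O> {
        messages: Vec<Message>, // outgoing messages to deliver to peers
        output: Option<O>,      // present once this node has decided
    }

    trait Consensus {
        type Input;
        type Output;
        // Feed in this node's input (a contribution, a bool, data to sign, ...).
        fn handle_input(&mut self, input: Self::Input) -> Step<Self::Output>;
        // Feed in a message received from peer `sender`.
        fn handle_message(&mut self, sender: usize, msg: Message) -> Step<Self::Output>;
    }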

External crates developed for this library

  • Threshold Crypto: A threshold cryptosystem for collaborative message decryption and signature creation.
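
As a rough sketch of the threshold signing flow this crate enables (method names as documented for threshold_crypto; treat exact signatures and rand version compatibility as assumptions to check against its docs): the master key is split into shares, and any threshold + 1 signature shares combine into the single valid master signature.

    use threshold_crypto::SecretKeySet;

    fn main() {
        let mut rng = rand::thread_rng();
        // threshold = 1: any 2 signature shares can be combined.
        let sk_set = SecretKeySet::random(1, &mut rng);
        let pk_set = sk_set.public_keys();
        let msg = b"agreed payload";

        // Each node signs the same message with its own key share.
        let shares: Vec<_> = (0..3)
            .map(|i| (i, sk_set.secret_key_share(i).sign(msg)))
            .collect();

        // Any threshold + 1 valid shares yield the unique master signature.
        let sig = pk_set
            .combine_signatures(shares.iter().map(|(i, s)| (*i, s)))
            .expect("not enough valid shares");
        assert!(pk_set.public_key().verify(&sig, msg));
    }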

Getting Started

This library requires a distributed network environment to function. Details on network requirements TBD.

Note: Additional examples are currently in progress.

Build

Requires Rust 1.36 or higher and cargo (see the Rust installation instructions). The library is tested against the stable release channel.

$ cargo build [--release]

Testing

$ cargo test --release

See the tests README for more information on our testing toolkit.

Example Network Simulation

A basic example is included to run a network simulation.

$ cargo run --example simulation --release

[Screenshot of the simulation output omitted.]

Each epoch prints one line with the following columns:

Epoch: Epoch number. In each epoch, transactions are processed in a batch by the simulated nodes (default: 10) on a network. The batch is always output in one piece, with all transactions at once.
Min Time: Time in simulated milliseconds until the first correct (i.e. non-faulty) node outputs the batch.
Max Time: Time in simulated milliseconds until the last correct node outputs the batch.
Txs: Number of transactions processed in the epoch.
Msgs/Node: Average number of messages handled by a node, cumulative over the current and all previous epochs.
Size/Node: Average message size (in bytes) handled by a node, cumulative over the current and all previous epochs.

Options

Set different parameters to simulate different transaction and network conditions.

Flag Description
-h, --help Show help options
--version Show the version of hbbft
-n <n>, --nodes <n> The total number of nodes [default: 10]
-f <f>, --faulty <f> The number of faulty nodes [default: 0]
-t <txs>, --txs <txs> The number of transactions to process [default: 1000]
-b <b>, --batch <b> The batch size, i.e. txs per epoch [default: 100]
-l <lag>, --lag <lag> The network lag between sending and receiving [default: 100]
--bw <bw> The bandwidth, in kbit/s [default: 2000]
--cpu <cpu> The CPU speed, in percent of this machine's [default: 100]
--tx-size <size> The size of a transaction, in bytes [default: 10]

Examples:

# view options
$ cargo run --example simulation --release -- -h

# simulate a network with 12 nodes, 2 of which are faulty
$ cargo run --example simulation --release -- -n 12 -f 2

# increase batch size to 500 transactions per epoch
$ cargo run --example simulation --release -- -b 500

Protocol Modifications

Our implementation modifies the protocols described in "The Honey Badger of BFT Protocols" in several ways:

  • We use a pairing elliptic curve library to implement pairing-based cryptography using a Barreto-Lynn-Scott (BLS12-381) curve.
  • We add a Terminate message to the Binary Agreement algorithm. Termination occurs following output, preventing the algorithm from running (or staying in memory) indefinitely. (#53)
  • We add a Conf message to the Binary Agreement algorithm. An additional message phase prevents an attack if an adversary controls a network scheduler and a node. (#37) Both added message types are sketched after this list.
  • We return additional information from the Subset and Honey Badger algorithms that specifies which node input which data. This allows for identification of potentially malicious nodes.
  • We include a Distributed Key Generation (DKG) protocol which does not require a trusted dealer; nodes collectively generate a secret key. This addresses the problem of a single point of failure. See Distributed Key Generation in the Wild.
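
For illustration, a hypothetical version of the Binary Agreement message type with the two added kinds could look as follows (variant payloads are guesses for exposition, not the crate's definitions):

    // Hypothetical sketch; see the crate source for the real message type.
    enum BaMessage {
        BVal(bool),      // sender added this value to its candidate set
        Aux(bool),       // sender's vote in the current round
        Conf(Vec<bool>), // added phase thwarting a scheduler-plus-node adversary (#37)
        Term(bool),      // added termination: output, broadcast once, drop the instance (#53)
    }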

Algorithm naming conventions

We have simplified algorithm naming conventions from the original paper.

Honey Badger: originally HoneyBadgerBFT
Subset: originally Asynchronous Common Subset (ACS)
Broadcast: originally Reliable Broadcast (RBC)
Binary Agreement: originally Asynchronous Binary Byzantine Agreement (ABA)

References

Honey Badger Visualization

Contributing

See the CONTRIBUTING document for contribution, testing and pull request protocol.

License

Licensed under either of:

  • Apache License, Version 2.0
  • MIT license

at your option.

Comments
  • Ported more integration tests to the new net simulator

    Substantial progress towards resolving #322.

    Ported tests:

    • threshold_sign.rs
    • subset.rs
    • broadcast.rs
    • honey_badger.rs

    Left to do:

    • queueing_honey_badger.rs
    • dynamic_honey_badger.rs (Or just delete—it's superseded by net_dynamic_hb.rs?)
    • sync_key_gen.rs (if applicable?)

    Findings

    To be reported/fixed as separate issues/pull requests.

    In general, the new net simulator allows writing simpler and more compact code than the old network simulator. Some issues I encountered while porting are documented in the following sections.

    NetworkInfo hard to access in Adversaries

    I used the same strategy as the already ported binary_agreement_mitm tests: an Arc-shared map of NetworkInfo instances, populated on test node creation, so that Adversary implementations can access them.

    Access to each node's NetworkInfo instance should be provided by the net simulator to eliminate that workaround.

    broadcast test failing with ReorderingAdversary

    Initially I replaced SilentAdversary::new(MessageScheduler::Random) with the new framework's ReorderingAdversary, which worked well for threshold_sign but not at all for the broadcast tests: the message queue would starve in about 50% of the cases, or (less often) the adversary's broadcast would actually win.

    Analyzing the behavior of the "Random" scheduler in the old framework and re-implementing that behavior in a new function in adversary.rs, sort_by_random_node, made the tests pass again. This may point to a fragility in the broadcast algorithm implementation, although it could also be argued that the capabilities of ReorderingAdversary go beyond what an attacker would be capable of in the real world.

    To reproduce:

    In tests/broadcast.rs, change the line

        MessageSorting::RandomPick => sort_by_random_node(&mut net, rng),

    to

        MessageSorting::RandomPick => swap_random(&mut net, rng),

    then run cargo test --release test_broadcast_random_delivery_adv_propose a couple of times.

    opened by dforsten 37
  • Optimized broadcast #309

    WIP for #309

    • [x] Add the extra CanDecode and EchoHash messages in the Messages enum (see the sketch after this list).
    • [x] Have pessimism factor g as a configurable parameter in the Broadcast struct.
    • [x] Write handle_echo_hash and handle_can_decode functions and modify other handle functions accordingly.
    • [ ] Change handle_value to prevent echoing a message back to proposer. Would save (N-1) messages regardless of g (original idea of the issue).
    • [x] Test normal flow of messages
    • [x] Test for expected reduction in number of messages with different values of g.
    • [ ] Additional testing
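
    Schematically, the broadcast message enum grows two kinds (a hypothetical sketch of the checklist above, not the PR's actual code):

        // Hypothetical sketch for #309; payload types are illustrative.
        enum BroadcastMessage {
            Value(Vec<u8>),      // proposer's erasure-coded shard for one peer
            Echo(Vec<u8>),       // full shard relayed between peers
            EchoHash([u8; 32]),  // NEW: only the shard's hash, much smaller than Echo
            CanDecode([u8; 32]), // NEW: sender already holds enough shards to decode
            Ready([u8; 32]),     // root hash the sender is ready to commit to
        }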
    opened by pawanjay176 36
  • Project building error

    Cannot build the project due to the following error:

    Evgens-MacBook-Pro:hbbft user$ cargo build
       Compiling scopeguard v0.3.3
       Compiling smallvec v0.6.0
       Compiling cfg-if v0.1.2
       Compiling nodrop v0.1.12
       Compiling lazy_static v1.0.0
       Compiling crossbeam v0.3.2
       Compiling remove_dir_all v0.5.0
       Compiling stable_deref_trait v1.0.0
       Compiling either v1.5.0
       Compiling rayon-core v1.4.0
       Compiling lazy_static v0.2.11
       Compiling memoffset v0.1.0
       Compiling untrusted v0.5.1
       Compiling protobuf v1.5.1
       Compiling cc v1.0.10
       Compiling memoffset v0.2.1
       Compiling libc v0.2.40
       Compiling gcc v0.3.54
       Compiling log v0.4.1
       Compiling crossbeam-utils v0.2.2
       Compiling arrayvec v0.4.7
       Compiling owning_ref v0.3.3
       Compiling protoc v1.5.1
       Compiling num_cpus v1.8.0
       Compiling rand v0.4.2
       Compiling time v0.1.39
       Compiling crossbeam-epoch v0.3.1
       Compiling crossbeam-epoch v0.2.0
       Compiling reed-solomon-erasure v3.0.3
       Compiling simple_logger v0.5.0
       Compiling crossbeam-deque v0.2.0
       Compiling tempdir v0.3.7
       Compiling parking_lot_core v0.2.13
       Compiling parking_lot v0.4.8
       Compiling rayon v1.0.1
       Compiling rayon v0.8.2
       Compiling crossbeam-channel v0.1.2
       Compiling ring v0.12.1
       Compiling merkle v1.5.1-pre (https://github.com/vkomenda/merkle.rs?branch=public-proof#f34be0aa)
       Compiling protoc-rust v1.5.1
       Compiling hbbft v0.1.0 (file:///Projects/hbbft)
    error: failed to run custom build command for `hbbft v0.1.0 (file:///Projects/hbbft)`
    process didn't exit successfully: `/Projects/hbbft/target/debug/build/hbbft-b8fd1374e3e5d302/build-script-build` (exit code: 101)
    --- stdout
    cargo:rerun-if-changed=proto/message.proto
    
    --- stderr
    thread 'main' panicked at 'protoc: Error { repr: Os { code: 2, message: "No such file or directory" } }', src/libcore/result.rs:916:5
    stack backtrace:
       0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
                 at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
       1: std::panicking::default_hook::{{closure}}
                 at src/libstd/sys_common/backtrace.rs:68
                 at src/libstd/sys_common/backtrace.rs:57
                 at src/libstd/panicking.rs:381
       2: std::panicking::default_hook
                 at src/libstd/panicking.rs:397
       3: std::panicking::begin_panic
                 at src/libstd/panicking.rs:577
       4: std::panicking::begin_panic
                 at src/libstd/panicking.rs:538
       5: std::panicking::try::do_call
                 at src/libstd/panicking.rs:522
       6: std::panicking::try::do_call
                 at src/libstd/panicking.rs:498
       7: <core::ops::range::Range<Idx> as core::fmt::Debug>::fmt
                 at src/libcore/panicking.rs:71
       8: core::result::unwrap_failed
                 at /Users/travis/build/rust-lang/rust/src/libcore/macros.rs:23
       9: <core::result::Result<T, E>>::expect
                 at /Users/travis/build/rust-lang/rust/src/libcore/result.rs:809
      10: build_script_build::main
                 at ./build.rs:5
      11: std::rt::lang_start::{{closure}}
                 at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
      12: std::panicking::try::do_call
                 at src/libstd/rt.rs:59
                 at src/libstd/panicking.rs:480
      13: panic_unwind::dwarf::eh::read_encoded_pointer
                 at src/libpanic_unwind/lib.rs:101
      14: std::sys_common::bytestring::debug_fmt_bytestring
                 at src/libstd/panicking.rs:459
                 at src/libstd/panic.rs:365
                 at src/libstd/rt.rs:58
      15: std::rt::lang_start
                 at /Users/travis/build/rust-lang/rust/src/libstd/rt.rs:74
      16: build_script_build::main
    
    opened by EvgenKor 27
  • Overwrite `SecretKey` memory on drop

    • Added the dependency clear_on_drop, which ensures that the compiler doesn't elide SecretKey's destructor (see the sketch after this list).
    • Added impl Default for SecretKey.
      • Default is a trait bound for ClearOnDrop type.
    • Added impl Drop to NetworkInfo to overwrite SecretKey memory when going out of scope (uses ClearOnDrop type).
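
    A minimal sketch of the pattern, assuming the clear_on_drop crate's ClearOnDrop wrapper (check that crate's docs for the exact trait bounds):

        use clear_on_drop::ClearOnDrop;

        // Illustrative stand-in; `Default` lets the wrapper reset the value on drop.
        #[derive(Default)]
        struct SecretKey([u8; 32]);

        fn main() {
            let mut key = SecretKey([0x42; 32]);
            {
                let guard = ClearOnDrop::new(&mut key);
                let _ = &guard.0; // use `guard` like a `&mut SecretKey`
            } // the secret bytes are overwritten here; the write cannot be elided
            assert_eq!(key.0, [0u8; 32]);
        }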
    opened by DrPeterVanNostrand 16
  • Limit message caches and future epochs?

    We need to ensure that an attacker can't fill up a node's memory by sending it lots of messages that it stores forever, particularly in the algorithms that proceed in epochs (agreement, Honey Badger, …) and that, to be fully asynchronous, would in theory need to store any information for any future epoch until they can process it.

    It's tempting to enforce some reasonable limits instead, and just assume that we'll never fall behind by more than 1000 epochs. But then an attacker could break consensus by sending a message that is exactly 1000 epochs ahead and will be handled by only some of the honest nodes.

    Either way, we should try to limit the amount of memory per epoch: e.g. in agreement, incoming_queue should be replaced by just the information of which of the other nodes sent which of the four possible messages: BVal(true), BVal(false), Aux(true), Aux(false), as sketched below.
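
    A sketch of that bounded representation (illustrative types): four booleans per peer replace an unbounded queue of buffered messages.

        use std::collections::BTreeMap;

        type NodeId = usize;

        // Which of the four possible votes a peer has already sent for one
        // future epoch: BVal(true), BVal(false), Aux(true), Aux(false).
        #[derive(Clone, Copy, Default)]
        struct ReceivedVotes {
            bval_true: bool,
            bval_false: bool,
            aux_true: bool,
            aux_false: bool,
        }

        // Memory per epoch is bounded by the number of peers, no matter
        // how many duplicate or junk messages an attacker sends.
        #[derive(Default)]
        struct EpochState {
            votes: BTreeMap<NodeId, ReceivedVotes>,
        }

        impl EpochState {
            fn record_bval(&mut self, from: NodeId, b: bool) {
                let v = self.votes.entry(from).or_default();
                if b { v.bval_true = true } else { v.bval_false = true }
            }
        }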

    • [X] Honey Badger
    • [ ] Binary Agreement
    Priority: High 
    opened by afck 14
  • Remove a random subset of validators in net_dynamic_hb

    related issue #374

    do_drop_and_re_add chooses at least one node at random, removes it from the network, and then re-adds the removed nodes. The cluster always remains correct: before and after removing validators, the network correctness condition (N ≥ 3f + 1) is satisfied.

    Removed nodes can be faulty as well as correct ones. The test log shows how many nodes are actually removed:

    Max number of nodes for removing is 8, was chosen 2 nodes
    Will remove and re-add nodes {1, 2}
    
    opened by RicoGit 12
  • Change the license to MIT/Apache-2.0 by popular vote

    The provisional GNU GPL license is too strong for this library and should be replaced with MIT/Apache-2.0.

    To communicate approval, all project contributors (@afck, @vkomenda, @DrPeterVanNostrand, @mbr, @c0gent, @DemiMarie, @andogro, @igorbarinov, @ErichDonGubler) should reply in this thread with "I approve the license change to MIT/Apache-2.0"

    opened by andogro 12
  • Dynamic Honey Badger sender queue

    Issue #43 needs an extra bit of design for queueing and dequeueing Honey Badger (HB) messages. An instance of Dynamic Honey Badger (DHB) contains an embedded instance of HB. This embedded HB instance is restarted on DKG change or completion events. When restarting it, any queued messages should be stored in the DHB instance for later delivery.

    opened by vkomenda 11
  • Fix sender queue messaging when a validator is removed and then rejoined

    The sender queue currently does not handle the edge case of removing and then rejoining the same validator in the next epoch correctly. The bug is as follows: the removed validator announces the epoch in which it is no longer a validator. Other nodes start sending messages for that epoch. If we start a ballot for adding the same node back as a validator in that next epoch, we can no longer correctly restart the removed validator with the join plan, because it has already processed some of the messages for that epoch.

    With the current implementation of the sender queue this use case requires additional processing of messages, e.g., replaying those next epoch messages on the restarted node. There are better options.

    Messages to the removed validator should be queued correctly on all other validators starting from the epoch when it is no longer a validator. If the removed validator is going to rejoin in that epoch, it must not send the EpochStarted message for the epoch when it is no longer a validator until it receives a JoinPlan out of band and restarts. Once restarted, it sends out the EpochStarted message and other validators send the queued messages to it.

    This solution changes how removed validators are treated. They no longer become observers since they don't send any messages until they receive a JoinPlan.

    If you think there are better solutions or that we simply have to keep the removed validator as an observer by default, let's discuss it in this issue.

    opened by vkomenda 10
  • External sender queue implementation

    Fixes #226

    This PR implements a reference spam-control mechanism, the sender queue, by requiring that all peers implement post-processing of outgoing messages as follows:

    • Peer epochs are monitored.

    • An outgoing message is sent to a peer if and when the sender reckons the peer is at an epoch that matches the epoch of the message. If the peer is already ahead, the message is discarded by the sender instead of being sent. (See the sketch after this list.)

    • Target::All messages may be split into Target::Node messages for each peer if peers happen to be at different epochs, some of which match the epoch of the message while others don't.
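
    The per-message decision those bullets describe can be sketched like this (illustrative code, not the PR's implementation):

        // Sketch of the sender-queue rule for one outgoing message.
        enum Disposition {
            Send,    // peer's announced epoch matches the message's epoch
            Defer,   // peer is behind: queue until it reaches the message's epoch
            Discard, // peer is already ahead: the message is obsolete
        }

        fn dispose(peer_epoch: u64, msg_epoch: u64) -> Disposition {
            if peer_epoch == msg_epoch {
                Disposition::Send
            } else if peer_epoch < msg_epoch {
                Disposition::Defer
            } else {
                Disposition::Discard
            }
        }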

    Incoming message queues have been removed. Message recipients discard obsolete messages and return a Fault for messages with epochs too far ahead.

    The sender queue is agnostic to the differences between validators and observers. All it cares about are the peer IDs and the ID of the node it is running on. I believe this is approximately in line with Option 2 of @afck's comment.

    There are three implementations of a sender queue, one for each of HoneyBadger, DynamicHoneyBadger and QueueingHoneyBadger, although the last one uses the DynamicHoneyBadger implementation, so those two are essentially the same.

    Comments welcome: I'm not sure whether Epoched::Epoch is fine to stay Copy rather than Clone. Maybe I should have implemented optional storage for the computed epoch and made Epoched::Epoch Clone instead, to avoid recomputing the epoch of the same message multiple times.

    opened by vkomenda 10
  • [Question] Can HBBFT achieve consensus on a fragmentary network?

    ==================2018-10-23============

    For example, I'm using wireguard to form a private network:
    A (my home PC) is connected to B (VPS),
    while B (VPS) is connected to C (my office PC),
    but A and C can't connect to each other directly:
    they are both behind NATs that wireguard can't traverse.
    Their VPN IPs are A (10.1.0.5), B (10.1.0.1), C (10.1.0.3).
    
    If I run commands:
    on A:   ./consensus-node --bind-address=10.1.0.5:9431 --remote-address=10.1.0.1:9431 --value=hello
    on B:./consensus-node --bind-address=10.1.0.1:9431 --remote-address=10.1.0.5:9431
    Then A and B responded:
    Broadcast succeeded! Node 0 output: hello
    Broadcast succeeded! Node 1 output: hello
    
    If I run commands:
    on C: ./consensus-node --bind-address=10.1.0.3:9431 --remote-address=10.1.0.1:9431 --value=hello
    on B: ./consensus-node --bind-address=10.1.0.1:9431 --remote-address=10.1.0.3:9431
    Then C and B responded:
    Broadcast succeeded! Node 0 output: hello
    Broadcast succeeded! Node 1 output: hello
    
    But if I run commands:
    on A:   ./consensus-node --bind-address=10.1.0.5:9431 --remote-address=10.1.0.1:9431 --remote-address=10.1.0.3:9431 --value=hello
    on B:./consensus-node --bind-address=10.1.0.1:9431 --remote-address=10.1.0.5:9431 --remote-address=10.1.0.3:9431
    On C:  ./consensus-node --bind-address=10.1.0.3:9431 --remote-address=10.1.0.1:9431 --remote-address=10.1.0.5:9431
    Then they won't achieve consensus until timeout.
    
    Can HBBFT achieve consensus on my fragmentary network?
    

    ==================2018-10-24============ New problem, weird: I can't initialize "value" on C. The strace details:

    on B, it is ok:

    $ md5sum consensus-node
    a585343cd9d4cea401bb29f7f822f1a5  consensus-node
    $ strace ./consensus-node --bind-address=10.1.0.1:9431 --remote-address=10.1.0.3:9431 --value hello
    .......
    write(1, "Args { bind_address: V4(10.1.0.1"..., 120Args { bind_address: V4(10.1.0.1:9431), remote_addresses: {V4(10.1.0.3:9431)}, value: Some([104, 101, 108, 108, 111]) }
    ) = 120
    socket(PF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_IP) = 3
    setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
    bind(3, {sa_family=AF_INET, sin_port=htons(9431), sin_addr=inet_addr("10.1.0.1")}, 16) = 0
    listen(3, 128)                          = 0
    futex(0x7ff6d35150a8, FUTEX_WAKE_PRIVATE, 2147483647) = 0
    accept4(3,
    

    on C, it will fail immediately:

    $ md5sum consensus-node
    a585343cd9d4cea401bb29f7f822f1a5  consensus-node
    $ strace ./consensus-node --bind-address=10.1.0.3:9431 --remote-address=10.1.0.1:9431 --value=hello
    .......
    write(1, "Args { bind_address: V4(10.1.0.3"..., 120Args { bind_address: V4(10.1.0.3:9431), remote_addresses: {V4(10.1.0.1:9431)}, value: Some([104, 101, 108, 108, 111]) }
    ) = 120
    socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_IP) = 3
    setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
    bind(3, {sa_family=AF_INET, sin_port=htons(9431), sin_addr=inet_addr("10.1.0.3")}, 16) = 0
    listen(3, 128)                          = 0
    socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_IP) = 4
    connect(4, {sa_family=AF_INET, sin_port=htons(9431), sin_addr=inet_addr("10.1.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)
    close(4)
    
    opened by diyism 10
  • `failure` dependency CVE-2019-25010

    Hi, dependabot just informed me of HBBFT's failure dependency having a security-relevant bug (CVE-2019-25010). We'll probably fix it in fedimint/hbbft at some point. Are you guys still interested in upstream contributions, or is the project pretty much dead (lots of outdated dependencies, no activity, etc.)?

    opened by elsirion 0
  • Question regarding operating multiple nodes

    First of all, love that crate. I am learning Rust and developing a (simple) blockchain and want to use this consensus system!

    I have, however, a few understanding questions:

    1 - When the network starts, one node shall be built using DynamicHoneyBadgerBuilder::build_first_node. With that you can make a queueing honey badger. Other nodes do not know if someone else is there. Do they connect using P2P and generate a DynamicHoneyBadger object, or wait to receive messages?

    2 - When a transaction/contribution is generated on the first node, a batch is created. Is it necessary to send the batch to all the other nodes, or just the messages in the step? If only the messages, how does a node validate the contribution? Do the nodes have to have the same contributions?

    3 - When all the nodes agree on the contributions, what is the signal that it is completed? No more messages in steps? A Ready message? I could not find any info or clues in the docs; maybe I did not look in the right place?

    Thanks again for the great work !

    opened by jejdouay 0
  • Question regarding missed batch

    Hi, I'm trying to understand different BFT implementations. Just a question if anybody could help me out, specific to this library.

    Should there be a network failure that causes a node to miss a batch of contributions, what would happen? Does hbbft attempt to recover the missed batch, or would the node fail forever from then on?

    Thanks for your time.

    opened by promptlyspeaking 1
  • API for use from other languages?

    Wondering how to make this work with other languages. Is there an API to consume this library from other languages, such as JS? Something like REST/HTTP or Websocket messaging / gRPC?

    opened by KrishnaPG 1
  • Error Cargo.toml

    error: could not find Cargo.toml in /Users/salemalqahtani or any parent directory.

    I am trying to run the simulation on my mac computer and the above error appeared. I cannot work around it. Can anybody help me out?

    Sincerely

    opened by salemmohammed 1