
Lighthouse: Ethereum 2.0

An open-source Ethereum 2.0 client, written in Rust and maintained by Sigma Prime.


Overview

Lighthouse is:

  • Ready for use on Eth2 mainnet.
  • Fully open-source, licensed under Apache 2.0.
  • Security-focused. Fuzzing techniques have been continuously applied and several external security reviews have been performed.
  • Built in Rust, a modern language providing unique safety guarantees and excellent performance (comparable to C++).
  • Funded by various organisations, including Sigma Prime, the Ethereum Foundation, ConsenSys, the Decentralization Foundation and private individuals.
  • Actively involved in the specification and security analysis of the Ethereum 2.0 specification.

Eth2 Deposit Contract

The Lighthouse team acknowledges 0x00000000219ab540356cBB839Cbe05303d7705Fa as the canonical Eth2 deposit contract address.

Documentation

The Lighthouse Book contains information for users and developers.

The Lighthouse team maintains a blog at lighthouse.sigmaprime.io which contains periodic progress updates, roadmap insights and interesting findings.

Branches

Lighthouse maintains two permanent branches:

  • stable: Always points to the latest stable release.
    • This is ideal for most users.
  • unstable: Used for development, contains the latest PRs.
    • Developers should base their PRs on this branch.

Contributing

Lighthouse welcomes contributors.

If you are looking to contribute, please head to the Contributing section of the Lighthouse book.

Contact

The best place for discussion is the Lighthouse Discord server. Alternatively, you may use the sigp/lighthouse gitter.

Sign up to the Lighthouse Development Updates mailing list for email notifications about releases, network status and other important information.

Encrypt sensitive messages using our PGP key.

Donations

Lighthouse is an open-source project and a public good. Funding public goods is hard and we're grateful for the donations we receive from the community via:

  • Gitcoin Grants.
  • Ethereum address: 0x25c4a76E7d118705e7Ea2e9b7d8C59930d8aCD3b (donation.sigmaprime.eth).
Comments
  • BooleanBitfield needs to be made sane

    BooleanBitfield needs to be made sane

    There is an implementation of a Boolean Bitfield here:

    https://github.com/sigp/lighthouse/tree/master/boolean-bitfield

    It (kinda) does the job for now, but it really needs some work done. If you spend some time looking at it I think you'll soon find out what I mean. For example:

    • There is a possibility of overflows: we return the number of bits as a usize, however the underlying storage can theoretically hold usize::MAX bytes, meaning up to 8 * usize::MAX bits.
    • It keeps track of the number of true bits as you flip bits on and off. I don't think this is ideal, as in most cases where we want to know the number of true bits we'll be receiving serialized bytes from somewhere else (e.g., p2p nodes) and will need to calculate it manually.

    On top of these two points, there's likely many chances for optimization.

    Required Functionality

    Get

    get(n: usize) -> Result<bool, Error>
    

    Get value at index n.

    Error if bit out-of-bounds (OOB) of underlying bytes.

    Set

    set(n: usize, val: bool) -> Result<bool, Error>
    

    Set bit at index n. Returns the previous value if successful.

    Error if bit is OOB of underlying bytes.

    Highest Set Bit

    highest_set_bit() -> Option<usize>
    

    Returns the index of the highest set bit: Some(n) if a bit is set, None otherwise.

    Note: this is useful because we need to reject messages if an unnecessary bit is set (e.g., if there are 10 voters and the 11th bit is set).

    Number of Underlying Bytes

    num_bytes() -> usize
    

    Returns the length of the underlying bytes.

    Note: useful to reject bitfields that are larger than required (e.g., if there are eight voters and two bytes, only one byte is necessary).

    Number of Set Bits

    num_set_bits() -> usize
    

    Returns the total number of set bits (i.e., how many people voted).

    Note: I'm not 100% sure we'll use this but I suspect we will.
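    The required functionality above could be sketched as follows. This is an illustrative sketch only, not the actual sigp/lighthouse implementation: MSB-first bit ordering within each byte is an assumption, and num_set_bits is computed on demand rather than tracked on every flip, per the second point above.

```rust
/// Hypothetical sketch of the requested BooleanBitfield API.
#[derive(Debug, PartialEq)]
pub enum Error {
    OutOfBounds { index: usize, len: usize },
}

pub struct BooleanBitfield {
    bytes: Vec<u8>,
}

impl BooleanBitfield {
    pub fn from_bytes(bytes: Vec<u8>) -> Self {
        Self { bytes }
    }

    /// Get value at index `n`; `Err` if the bit is OOB of the underlying bytes.
    pub fn get(&self, n: usize) -> Result<bool, Error> {
        let byte = self.bytes.get(n / 8).ok_or(Error::OutOfBounds {
            index: n,
            len: self.bytes.len() * 8,
        })?;
        Ok((byte & (1 << (7 - n % 8))) != 0)
    }

    /// Set bit at index `n`, returning the previous value if successful.
    pub fn set(&mut self, n: usize, val: bool) -> Result<bool, Error> {
        let prev = self.get(n)?;
        let mask = 1u8 << (7 - n % 8);
        if val {
            self.bytes[n / 8] |= mask;
        } else {
            self.bytes[n / 8] &= !mask;
        }
        Ok(prev)
    }

    /// Index of the highest set bit: `Some(n)` if a bit is set, `None` otherwise.
    pub fn highest_set_bit(&self) -> Option<usize> {
        (0..self.bytes.len() * 8).rev().find(|&i| self.get(i) == Ok(true))
    }

    /// Length of the underlying bytes.
    pub fn num_bytes(&self) -> usize {
        self.bytes.len()
    }

    /// Total number of set bits, computed on demand from the bytes.
    pub fn num_set_bits(&self) -> usize {
        self.bytes.iter().map(|b| b.count_ones() as usize).sum()
    }
}
```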

    help wanted good first issue good for bounty 
    opened by paulhauner 38
  • Beacon Node: Unable to recover from network fragmentation

    Beacon Node: Unable to recover from network fragmentation

    Description

    Given is a custom beacon-chain testnet with nodes in physically distinct network locations [A, B]. The nodes have both an ENR supplied in a testnet directory specified as boot-enr.yaml and (due to an inability to sufficiently network through ENR) a multi-address command line flag.

    The ENR file looks like this: gist/d6eea3ea3356e41bde81864143284ce9#file-4-boot_enr-yaml

    The multi-addresses look like this:

    --libp2p-addresses /ip4/51.158.190.99/tcp/9000,/ip4/87.180.203.227/tcp/9000
    

    Version

    ~/.opt/lighthouse master*
    ❯ git log -n1
    commit f6a6de2c5d9e6fe83b6ded24bad93615f98e2163 (HEAD -> master, origin/master, origin/HEAD)
    Author: Sacha Saint-Leger <[email protected]>
    Date:   Mon Mar 23 09:21:53 2020 +0100
    
        Become a Validator guides: update (#928)
    
    ~/.opt/lighthouse master*
    ❯ rustc --version
    rustc 1.42.0 (b8cedc004 2020-03-09)
    

    Present Behaviour

    In case there is a network fragmentation between A and B, the nodes do not attempt to reconnect to each other. Both nodes know about each other both from the ENR and the multi-address format.

    Furthermore, if both A and B are validators, a chain split occurs with two different head slots.

    It's possible to reconnect the nodes by manually restarting the beacon chain nodes; however, the chains of A and B are then unable to reorganize to the best head and the peers ban each other.

    Mar 23 10:53:27.534 ERRO Disconnecting and banning peer          timeout: 30s, peer_id: PeerId("16Uiu2HAkxE6kBjfoGtSfhSJAE8oib6h3gM972pAj9brmthTHuuP2"), service: network
    Mar 23 10:53:27.612 ERRO Disconnecting and banning peer          timeout: 30s, peer_id: PeerId("16Uiu2HAmPz5xFwZfY4CYN6fxf3Yz6LQaDfzCUrA5qwCoTHoCSiNR"), service: network
    Mar 23 10:53:38.000 WARN Low peer count                          peer_count: 1, service: slot_notifier
    

    In this case, deleting either A's or B's beacon chain and temporarily stopping the validators is the only way to recover from this issue.

    Expected Behaviour

    The beacon chain node should aggressively try to maintain connections. It should keep trying to resolve ENR and multi-addresses even after a disconnect.

    I don't know how it's designed, but I imagine:

    1. having higher timeouts to prevent disconnects in case of short network fragmentation
    2. having repeated connection attempts even if previous attempts failed in case of longer or more severe network fragmentation

    I don't have enough understanding of consensus to suggest how to handle the reorganization, but having a more stable network would certainly help here.

    Steps to resolve

    Manual restart including reset of the beacon chain data directory.

    opened by q9f 31
  • Add cache for selection proof signatures

    Add cache for selection proof signatures

    Issue Addressed

    #3216

    Proposed Changes

    Add a cache to each VC for selection proofs that have recently been computed. This ensures that duplicate selection proofs are processed quickly.

    Additional Info

    Maximum cache size is 64 signatures, at which point the oldest signature (or rather the signature referencing the oldest message) is removed to make room for the next signature.

    WIP: Still need to remove pre-computation selection proofs if these are no longer desired.
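    A minimal sketch of such a bounded, oldest-evicting cache (illustrative only; the BoundedSigCache name and the generic key/value types are assumptions, not the VC's actual code):

```rust
use std::collections::{HashMap, VecDeque};

/// Bounded cache keyed by message; when full, the entry referencing the
/// oldest message is evicted to make room for the next one.
pub struct BoundedSigCache<K: std::hash::Hash + Eq + Clone, V> {
    map: HashMap<K, V>,
    order: VecDeque<K>,
    capacity: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> BoundedSigCache<K, V> {
    pub fn new(capacity: usize) -> Self {
        Self { map: HashMap::new(), order: VecDeque::new(), capacity }
    }

    /// Look up a previously computed value (e.g., a selection proof).
    pub fn get(&self, k: &K) -> Option<&V> {
        self.map.get(k)
    }

    /// Insert a value, evicting the oldest entry if the cache is full.
    pub fn insert(&mut self, k: K, v: V) {
        if !self.map.contains_key(&k) {
            if self.order.len() == self.capacity {
                if let Some(oldest) = self.order.pop_front() {
                    self.map.remove(&oldest);
                }
            }
            self.order.push_back(k.clone());
        }
        self.map.insert(k, v);
    }
}
```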

    work-in-progress 
    opened by macladson 28
  • [Merged by Bors] - [Altair] Sync committee pools

    [Merged by Bors] - [Altair] Sync committee pools

    Add pools supporting sync committees:

    • naive sync aggregation pool
    • observed sync contributions pool
    • observed sync contributors pool
    • observed sync aggregators pool

    Add SSZ types and tests related to sync committee signatures.

    ready-for-bors 
    opened by realbigsean 27
  • Lighthouse re-licensing: GPL 2.0 -> Apache 2.0

    Lighthouse re-licensing: GPL 2.0 -> Apache 2.0

    Sigma Prime core contributors (@paulhauner, @AgeManning, @spble, @kirk-baird, @michaelsproul and others) have recently been discussing licensing for Lighthouse. We would like to change the licensing on Lighthouse to a more permissive, less restrictive license: Apache 2.0 (presently GPL 2.0).

    We would like to request our contributors to provide their approval for this change. If you approve this re-licensing, please comment in this issue with the sentence "I agree with re-licensing my work on this project to Apache 2.0" or similar.

    If you disagree with this change and would like to discuss further, please feel free to do so here or reach out to us on Gitter.

    Thank you all for helping make Lighthouse an even more open project!

    opened by zedt3ster 25
  • Implement tree hashing function

    Implement tree hashing function

    Description

    Implement the function described here: https://github.com/ethereum/eth2.0-specs/issues/54

    Present Behaviour

    Function doesn't exist.

    Expected Behaviour

    Function should exist.

    Steps to resolve

    AFAIK, the function is still experimental so be on the lookout for bugs and optimisations.

    At this stage, I think we should implement it as a separate crate.

    work-started 
    opened by paulhauner 25
  • [Merged by Bors] - Use the forwards iterator more often

    [Merged by Bors] - Use the forwards iterator more often

    Issue Addressed

    NA

    Primary Change

    When investigating memory usage, I noticed that retrieving a block from an early slot (e.g., slot 900) would cause a sharp increase in the memory footprint (from 400mb to 800mb+) which seemed to persist indefinitely.

    After some investigation, I found that the reverse iteration from the head back to that slot was the likely culprit. To counter this, I've switched the BeaconChain::block_root_at_slot to use the forwards iterator, instead of the reverse one.

    I also noticed that the networking stack is using BeaconChain::root_at_slot to check if a peer is relevant (check_peer_relevance). Perhaps the steep, seemingly-random-but-consistent increases in memory usage are caused by the use of this function.

    Using the forwards iterator with the HTTP API alleviated the sharp increases in memory usage. It also made the response much faster (before it felt like it took 1-2s, now it feels instant).

    Additional Changes

    In the process I also noticed that we have two functions for getting block roots:

    • BeaconChain::block_root_at_slot: returns None for a skip slot.
    • BeaconChain::root_at_slot: returns the previous root for a skip slot.

    I unified these two functions into block_root_at_slot and added the WhenSlotSkipped enum. Now, the caller must be explicit about the skip-slot behaviour when requesting a root.
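    As an illustration of the explicit skip-slot behaviour, here is a hypothetical sketch over a toy chain. WhenSlotSkipped and block_root_at_slot mirror the names in this PR, but the bodies and the stand-in types (Slot, Hash256) are illustrative only, not the real Lighthouse code.

```rust
/// Makes the caller explicit about what to return for a skip slot.
#[derive(Clone, Copy)]
pub enum WhenSlotSkipped {
    /// Return `None` for a skip slot (old `block_root_at_slot` behaviour).
    None,
    /// Return the previous root for a skip slot (old `root_at_slot` behaviour).
    Prev,
}

type Slot = u64;
type Hash256 = [u8; 32];

/// Toy chain: `roots[i]` is `Some(root)` if a block exists at slot `i`.
fn block_root_at_slot(
    roots: &[Option<Hash256>],
    slot: Slot,
    when_skipped: WhenSlotSkipped,
) -> Option<Hash256> {
    match roots.get(slot as usize)? {
        Some(root) => Some(*root),
        None => match when_skipped {
            WhenSlotSkipped::None => None,
            // Walk back to the most recent non-skipped slot.
            WhenSlotSkipped::Prev => {
                roots[..slot as usize].iter().rev().find_map(|r| *r)
            }
        },
    }
}
```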

    Additionally, I replaced vec![] with Vec::with_capacity in store::chunked_vector::range_query. I stumbled across this whilst debugging and made this modification to see what effect it would have (not much). It seems like a decent change to keep around, but I'm not concerned either way.

    Also, BeaconChain::get_ancestor_block_root is unused, so I got rid of it :wastebasket:.

    Additional Info

    I haven't done the same for state roots here. Whilst it's possible and a good idea, it's more work since the fwds iterators are presently block-roots-specific.

    Whilst there's a few places a reverse iteration of state roots could be triggered (e.g., attestation production, HTTP API), they're nowhere near as common as the check_peer_relevance call. As such, I think we should get this PR merged first, then come back for the state root iters. I made an issue here https://github.com/sigp/lighthouse/issues/2377.

    ready-for-bors 
    opened by paulhauner 22
  • [Merged by Bors] - Use async code when interacting with EL

    [Merged by Bors] - Use async code when interacting with EL

    Overview

    This rather extensive PR achieves two primary goals:

    1. Uses the finalized/justified checkpoints of fork choice (FC), rather than that of the head state.
    2. Refactors fork choice, block production and block processing to async functions.

    Additionally, it achieves:

    • Concurrent forkchoice updates to the EL and cache pruning after a new head is selected.
    • Concurrent "block packing" (attestations, etc) and execution payload retrieval during block production.
    • Concurrent per-block-processing and execution payload verification during block processing.
    • The Arc-ification of SignedBeaconBlock during block processing (it's never mutated, so why not?):
      • I had to do this to deal with sending blocks into spawned tasks.
      • Previously we were cloning the beacon block at least 2 times during each block processing, these clones are either removed or turned into cheaper Arc clones.
      • We were also Box-ing and un-Box-ing beacon blocks as they moved throughout the networking crate. This is not a big deal, but it's nice to avoid shifting things between the stack and heap.
      • Avoids cloning all the blocks in every chain segment during sync.
      • It also has the potential to clean up our code where we need to pass an owned block around so we can send it back in the case of an error (I didn't do much of this, my PR is already big enough :sweat_smile:)
    • The BeaconChain::HeadSafetyStatus struct was removed. It was an old relic from prior merge specs.

    For motivation for this change, see https://github.com/sigp/lighthouse/pull/3244#issuecomment-1160963273

    Changes to canonical_head and fork_choice

    Previously, the BeaconChain had two separate fields:

    canonical_head: RwLock<Snapshot>,
    fork_choice: RwLock<BeaconForkChoice>
    

    Now, we have grouped these values under a single struct:

    canonical_head: CanonicalHead {
      cached_head: RwLock<Arc<Snapshot>>,
      fork_choice: RwLock<BeaconForkChoice>
    } 
    

    Apart from ergonomics, the only actual change here is wrapping the canonical head snapshot in an Arc. This means that we no longer need to hold the cached_head (canonical_head, in old terms) lock when we want to pull some values from it. This was done to avoid deadlock risks by preventing functions from acquiring (and holding) the cached_head and fork_choice locks simultaneously.
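    The pattern described above can be sketched as follows. Snapshot is a stand-in for the real snapshot type; only the Arc-cloning behaviour (take a short read lock, clone the Arc, drop the lock) is illustrated.

```rust
use std::sync::{Arc, RwLock};

/// Stand-in for the real head snapshot type.
struct Snapshot {
    head_slot: u64,
}

struct CanonicalHead {
    cached_head: RwLock<Arc<Snapshot>>,
}

impl CanonicalHead {
    /// Return a handle to the current snapshot without keeping the lock:
    /// the read lock is held only long enough to clone the `Arc`.
    fn cached_head(&self) -> Arc<Snapshot> {
        self.cached_head.read().unwrap().clone()
    }
}
```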

    Breaking Changes

    The state (root) field in the finalized_checkpoint SSE event

    Consider the scenario where epoch n is just finalized, but start_slot(n) is skipped. There are two state roots we might use in the finalized_checkpoint SSE event:

    1. The state root of the finalized block, which is get_block(finalized_checkpoint.root).state_root.
    2. The state root at slot of start_slot(n), which would be the state from (1), but "skipped forward" through any skip slots.

    Previously, Lighthouse would choose (2). However, we can see that when Teku generates that event it uses getStateRootFromBlockRoot which uses (1).

    I have switched Lighthouse from (2) to (1). I think it's a somewhat arbitrary choice between the two, where (1) is easier to compute and is consistent with Teku.

    Notes for Reviewers

    I've renamed BeaconChain::fork_choice to BeaconChain::recompute_head. Doing this helped ensure I broke all previous uses of fork choice and I also find it more descriptive. It describes an action and can't be confused with trying to get a reference to the ForkChoice struct.

    I've changed the ordering of SSE events when a block is received. It used to be [block, finalized, head] and now it's [block, head, finalized]. It was easier this way and I don't think we were making any promises about SSE event ordering so it's not "breaking".

    I've made it so fork choice will run when it's first constructed. I did this because I wanted to have a cached version of the last call to get_head. Ensuring get_head has been run at least once means that the cached values don't need to be wrapped in an Option. This was fairly simple; it just involved passing a slot to the constructor so it knows when it's being run. When loading a fork choice from the store and a slot clock isn't handy I've just used the slot that was saved in the fork_choice_store. That seems like it would be a faithful representation of the slot when we saved it.

    I added the genesis_time: u64 to the BeaconChain. It's small, constant and nice to have around.

    Since we're using FC for the fin/just checkpoints, we no longer get the 0x00..00 roots at genesis. You can see I had to remove a work-around in ef-tests here: b56be3bc2. I can't find any reason why this would be an issue, if anything I think it'll be better since the genesis-alias has caught us out a few times (0x00..00 isn't actually a real root). Edit: I did find a case where the network expected the 0x00..00 alias and patched it here: 3f26ac3e2.

    You'll notice a lot of changes in tests. Generally, tests should be functionally equivalent. Here are the things creating the most diff-noise in tests:

    • Changing tests to be tokio::async tests.
    • Adding .await to fork choice, block processing and block production functions.
    • Refactor of the canonical_head "API" provided by the BeaconChain. E.g., chain.canonical_head.cached_head() instead of chain.canonical_head.read().
    • Wrapping SignedBeaconBlock in an Arc.
    • In the beacon_chain/tests/block_verification, we can't use the lazy_static CHAIN_SEGMENT variable anymore since it's generated with an async function. We just generate it in each test, not so efficient but hopefully insignificant.

    I had to disable rayon concurrent tests in the fork_choice tests. This is because the use of rayon and block_on was causing a panic.

    ready-for-bors backwards-incompat bellatrix 
    opened by paulhauner 21
  • FailedToInsertDeposit

    FailedToInsertDeposit

    Description

    I receive this error in my logs, over and over:

    {
      "msg": "Failed to update eth1 cache",
      "level": "ERRO",
      "ts": "2020-05-27T09:02:20.147080658-05:00",
      "service": "eth1_rpc",
      "error": "Failed to update eth1 cache: FailedToInsertDeposit(NonConsecutive { log_index: 52, expected: 51 })",
      "retry_millis": "7000"
    }
    
    

    I run lighthouse with the following command: /home/lighthouse/.cargo/bin/lighthouse --logfile /data/lighthouse/logs/beacon.log beacon_node --eth1 --http --ws --datadir /data/lighthouse --http-address 0.0.0.0 --ws-address 0.0.0.0

    I run a Nethermind eth1 client, archive sync to Goerli, version 1.8.40

    I will say that this is as far as I got, I'm not running a validator yet. I wanted the beacon node to be up and running and in sync before I moved on to the next steps...

    Version

    Lighthouse 0.1.2

    Present Behaviour

    It's not clear to me what this log means; I searched for it but I couldn't find it.

    Expected Behaviour

    Unknown

    opened by MysticRyuujin 21
  • InsufficientPeers resulting in missing sync committees

    InsufficientPeers resulting in missing sync committees

    Description

    Our lighthouse nodes occasionally start to miss attestations and today also missed sync committees. Looking at the beaconchain node logs, it seems that it has issues with publishing these messages because of Insufficient Peers, despite being connected to 50+ peers. Restarting the node solves the issue.

    Version

    Lighthouse v2.0.1 (using vouch as the validator client)

    Present Behaviour


    Dec 07 13:30:41 lhs-val01 lighthouse[997]: Dec 07 13:30:41.000 INFO Synced                                  slot: 2671351, block: 0xb807…9b44, epoch: 83479, finalized_epoch: 83477, finalized_root: 0x0e20…8ae0, peers: 55, service: slot_notifier
    Dec 07 13:30:49 lhs-val01 lighthouse[997]: Dec 07 13:30:49.652 INFO New block received                      hash: 0x81272318e6771c48a8cdd656255c1c61afd45d9a6cc5d037d28059235600e05e, slot: 2671352
    Dec 07 13:30:50 lhs-val01 lighthouse[997]: Dec 07 13:30:50.467 WARN Could not publish message               error: InsufficientPeers, service: libp2p
    Dec 07 13:30:51 lhs-val01 lighthouse[997]: Dec 07 13:30:51.487 WARN Could not publish message               error: InsufficientPeers, service: libp2p
    Dec 07 13:30:51 lhs-val01 lighthouse[997]: Dec 07 13:30:51.490 WARN Could not publish message               error: InsufficientPeers, service: libp2p
    Dec 07 13:30:51 lhs-val01 lighthouse[997]: Dec 07 13:30:51.491 WARN Could not publish message               error: InsufficientPeers, service: libp2p
    

    Steps to resolve

    Restarting the beaconchain node usually resolves the issue.

    bug 
    opened by fkbenjamin 20
  • [Merged by Bors] - Reduce outbound requests to eth1 endpoints

    [Merged by Bors] - Reduce outbound requests to eth1 endpoints

    Issue Addressed

    #2282

    Proposed Changes

    Reduce the outbound requests made to eth1 endpoints by caching the results from eth_chainId and net_version. Further reduce the overall request count by increasing auto_update_interval_millis from 7_000 (7 seconds) to 60_000 (1 minute). This will result in a reduction from ~2000 requests per hour to 360 requests per hour (during normal operation). A reduction of 82%.

    Additional Info

    If an endpoint fails, its state is dropped from the cache and the eth_chainId and net_version calls will be made for that endpoint again during the regular update cycle (once per minute) until it is back online.
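    An illustrative sketch of such an endpoint-metadata cache: results are cached per endpoint and dropped on failure, so they are re-queried on the next update cycle. The type and method names here are assumptions, not the actual eth1 service code.

```rust
use std::collections::HashMap;

/// Cached results of the eth_chainId and net_version calls for one endpoint.
#[derive(Clone, PartialEq, Debug)]
struct EndpointState {
    chain_id: u64,
    net_version: u64,
}

#[derive(Default)]
struct EndpointCache {
    states: HashMap<String, EndpointState>,
}

impl EndpointCache {
    /// Return the cached state, invoking `fetch` only on a cache miss.
    fn get_or_fetch<F>(&mut self, url: &str, fetch: F) -> EndpointState
    where
        F: FnOnce() -> EndpointState,
    {
        self.states.entry(url.to_string()).or_insert_with(fetch).clone()
    }

    /// Drop the cached state when the endpoint fails, so the next update
    /// cycle re-queries it.
    fn on_failure(&mut self, url: &str) {
        self.states.remove(url);
    }
}
```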

    ready-for-bors 
    opened by macladson 20
  • Consensus context with proposer index caching

    Consensus context with proposer index caching

    Issue Addressed

    Closes https://github.com/sigp/lighthouse/issues/2371

    Proposed Changes

    Backport some changes from tree-states that remove duplicated calculations of the proposer_index.

    With this change the proposer index should be calculated only once for each block, and then plumbed through to every place it is required.

    Additional Info

    In future I hope to add more data to the consensus context that is cached on a per-epoch basis, like the effective balances of validators and the base rewards.

    There are some other changes to remove indexing in tests that were also useful for tree-states (the tree-states types don't implement Index).

    ready-for-review optimization 
    opened by michaelsproul 2
  • Add 'light' beacon node configuration options

    Add 'light' beacon node configuration options

    This PR adds two new cli flags that can be used to run a light beacon node without backfilling beacon blocks or creating the deposit cache. Useful if used in conjunction with a weak subjectivity checkpoint.

    waiting-on-author 
    opened by pinkiebell 2
  • Remove fallback support from eth1 service

    Remove fallback support from eth1 service

    Issue Addressed

    N/A

    Proposed Changes

    With https://github.com/sigp/lighthouse/pull/3214 we made it such that you can either have 1 auth endpoint or multiple non-auth endpoints. Now that we are post-merge on all networks (testnets and mainnet), we cannot progress a chain without a dedicated auth execution layer connection, so there is no point in having a non-auth eth1 endpoint for syncing the deposit cache.

    This code removes all fallback related code in the eth1 service. We still keep the single non-auth endpoint since it's useful for testing.

    Additional Info

    This removes all eth1 fallback related metrics that were relevant for the monitoring service, so we might need to change the api upstream.

    ready-for-review 
    opened by pawanjay176 0
  • CLI tests for logging flags

    CLI tests for logging flags

    Description

    Two recent PRs have added new logging flags which can't be tested by our current CLI testing framework due to the LoggerConfig not being part of the client::Config that gets persisted to disk.

    The flags that would be good to test are:

    • https://github.com/sigp/lighthouse/pull/3538
    • https://github.com/sigp/lighthouse/pull/3586

    Steps to resolve

    Add the LoggerConfig to client::Config here: https://github.com/sigp/lighthouse/blob/aa022f46855df2a1420a6a80a788c73dc2779aa7/beacon_node/client/src/config.rs#L44

    Do some plumbing so that the logger config is copied into the client config on startup (probably in lighthouse/src/main.rs).

    Add two tests for each of the new flags: one checking the default value of the config without the flag and another checking that the config changes when the flag is provided (in lighthouse/tests/beacon_node.rs).

    good first issue test improvement 
    opened by michaelsproul 0
  • ERRO Aggregate attestation queue full

    ERRO Aggregate attestation queue full

    I am running lighthouse version 3.1.0 and syncing from a snapshot using the following command:

    /home/eth/node_beacon/lighthouse --network mainnet beacon_node --datadir=/home/eth/.ethereum --http --execution-endpoint=http://127.0.0.1:8551 --execution-jwt=/home/eth/node_beacon/jwt.hex --checkpoint-sync-url=XXX
    

    It worked fine for a few hours and got synced, but now it has stopped syncing and is throwing the following error:

    lighthouse[32463]: Sep 16 12:13:21.443 ERRO Aggregate attestation queue full        queue_len: 4096, msg: the system has insufficient resources for load
    

    What should I do?

    opened by sealbox 3