Parity-bridges-common - Collection of Useful Bridge Building Tools 🏗️

Overview

This is a collection of components for building bridges.

These components include Substrate pallets for syncing headers and passing arbitrary messages, as well as libraries for building relayers that provide cross-chain communication.

Three bridge nodes are also available. The nodes can be used to run test networks which bridge other Substrate chains.

🚧 The bridges are currently under construction - a hardhat is recommended beyond this point 🚧

Installation

To get up and running you need both stable and nightly Rust. Rust nightly is used to build the WebAssembly (WASM) runtime for the node. You can configure the WASM support as follows:

rustup install nightly
rustup target add wasm32-unknown-unknown --toolchain nightly

Once this is configured you can build and test the repo as follows:

git clone https://github.com/paritytech/parity-bridges-common.git
cd parity-bridges-common
cargo build --all
cargo test --all

You can also build the repo with the Parity CI Docker image:

docker pull paritytech/bridges-ci:production
mkdir ~/cache
chown 1000:1000 ~/cache # processes in the container run as the "nonroot" user with UID 1000
docker run --rm -it -w /shellhere/parity-bridges-common \
                    -v /home/$(whoami)/cache/:/cache/    \
                    -v "$(pwd)":/shellhere/parity-bridges-common \
                    -e CARGO_HOME=/cache/cargo/ \
                    -e SCCACHE_DIR=/cache/sccache/ \
                    -e CARGO_TARGET_DIR=/cache/target/  paritytech/bridges-ci:production cargo build --all
#artifacts can be found in ~/cache/target

If you want to reproduce other steps of the CI process you can use the following guide.

If you need more information about setting up your development environment, Substrate's Getting Started page is a good resource.

High-Level Architecture

This repo supports bridging foreign chains together using a combination of Substrate pallets and external processes called relayers. A bridge chain is one that is able to follow the consensus of a foreign chain independently. For example, consider the case below where we want to bridge two Substrate-based chains.

+---------------+                 +---------------+
|               |                 |               |
|     Rialto    |                 |    Millau     |
|               |                 |               |
+-------+-------+                 +-------+-------+
        ^                                 ^
        |       +---------------+         |
        |       |               |         |
        +-----> | Bridge Relay  | <-------+
                |               |
                +---------------+

The Millau chain must be able to accept Rialto headers and verify their integrity. It does this by using a runtime module designed to track GRANDPA finality. Since two blockchains can't interact directly, they need an external service, called a relayer, to communicate. The relayer will subscribe to new Rialto headers via RPC and submit them to the Millau chain for verification.
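
Conceptually, the header relayer is a small poll-and-submit loop. The sketch below is an illustration only; the traits and method names are hypothetical stand-ins for the RPC and transaction calls a real relayer such as substrate-relay performs:

// Illustrative header relayer loop; all traits, methods and timings here
// are hypothetical (the real relayer code lives in `relays/`).
use std::{thread, time::Duration};

// Hypothetical client for the source chain (e.g. Rialto).
trait SourceChain {
    fn best_finalized_number(&self) -> u64;
    // Header plus its GRANDPA finality proof, encoded for submission.
    fn header_and_finality_proof(&self, number: u64) -> Vec<u8>;
}

// Hypothetical client for the target chain (e.g. Millau).
trait TargetChain {
    // Best source-chain header already accepted by the on-chain light client.
    fn best_known_source_number(&self) -> u64;
    fn submit_finality_proof(&self, header_and_proof: Vec<u8>);
}

fn run_header_relay(source: &dyn SourceChain, target: &dyn TargetChain) {
    loop {
        let source_best = source.best_finalized_number();
        let target_known = target.best_known_source_number();
        // Submit every finalized source header the target hasn't verified yet.
        for number in (target_known + 1)..=source_best {
            target.submit_finality_proof(source.header_and_finality_proof(number));
        }
        thread::sleep(Duration::from_secs(6));
    }
}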

Take a look at the Bridge High Level Documentation for a more in-depth description of the bridge interaction.

Project Layout

Here's an overview of how the project is laid out. The main bits are the node, which is the actual "blockchain"; the modules, which are used to build the blockchain's logic (a.k.a. the runtime); and the relays, which are used to pass messages between chains.

├── bin             // Node and Runtime for the various Substrate chains
│  └── ...
├── deployments     // Useful tools for deploying test networks
│  └──  ...
├── diagrams        // Pretty pictures of the project architecture
│  └──  ...
├── modules         // Substrate Runtime Modules (a.k.a Pallets)
│  ├── grandpa      // On-Chain GRANDPA Light Client
│  ├── messages     // Cross Chain Message Passing
│  ├── dispatch     // Target Chain Message Execution
│  └──  ...
├── primitives      // Code shared between modules, runtimes, and relays
│  └──  ...
├── relays          // Application for sending headers and messages between chains
│  └──  ...
└── scripts         // Useful development and maintenance scripts

Running the Bridge

To run the Bridge you need to be able to connect the bridge relay node to the RPC interface of nodes on each side of the bridge (source and target chain).

There are two ways to run the bridge, described below:

  • building & running from source
  • running a Docker Compose setup (recommended).

Using the Source

First you'll need to build the bridge nodes and relay. This can be done as follows:

# In `parity-bridges-common` folder
cargo build -p rialto-bridge-node
cargo build -p millau-bridge-node
cargo build -p substrate-relay

Running a Dev network

We will launch a dev network to demonstrate how to relay a message between two Substrate-based chains (named Rialto and Millau).

To do this we will need two nodes, two relayers which will relay headers, and two relayers which will relay messages.

Running from local scripts

To run a simple dev network you can use the scripts located in the deployments/local-scripts folder.

First, we must run the two Substrate nodes.

# In `parity-bridges-common` folder
./deployments/local-scripts/run-rialto-node.sh
./deployments/local-scripts/run-millau-node.sh

After the nodes are up we can run the header relayers.

./deployments/local-scripts/relay-millau-to-rialto.sh
./deployments/local-scripts/relay-rialto-to-millau.sh

At this point you should see the relayer submitting headers from the Millau Substrate chain to the Rialto Substrate chain.

# Header Relayer Logs
[Millau_to_Rialto_Sync] [date] DEBUG bridge Going to submit finality proof of Millau header #147 to Rialto
[...] [date] INFO bridge Synced 147 of 147 headers
[...] [date] DEBUG bridge Going to submit finality proof of Millau header #148 to Rialto
[...] [date] INFO bridge Synced 148 of 149 headers

Finally, we can run the message relayers.

./deployments/local-scripts/relay-messages-millau-to-rialto.sh
./deployments/local-scripts/relay-messages-rialto-to-millau.sh

You will also see the message lane relayers listening for new messages.

# Message Relayer Logs
[Millau_to_Rialto_MessageLane_00000000] [date] DEBUG bridge Asking Millau::ReceivingConfirmationsDelivery about best message nonces
[...] [date] INFO bridge Synced Some(2) of Some(3) nonces in Millau::MessagesDelivery -> Rialto::MessagesDelivery race
[...] [date] DEBUG bridge Asking Millau::MessagesDelivery about message nonces
[...] [date] DEBUG bridge Received best nonces from Millau::ReceivingConfirmationsDelivery: TargetClientNonces { latest_nonce: 0, nonces_data: () }
[...] [date] DEBUG bridge Asking Millau::ReceivingConfirmationsDelivery about finalized message nonces
[...] [date] DEBUG bridge Received finalized nonces from Millau::ReceivingConfirmationsDelivery: TargetClientNonces { latest_nonce: 0, nonces_data: () }
[...] [date] DEBUG bridge Received nonces from Millau::MessagesDelivery: SourceClientNonces { new_nonces: {}, confirmed_nonce: Some(0) }
[...] [date] DEBUG bridge Asking Millau node about its state
[...] [date] DEBUG bridge Received state from Millau node: ClientState { best_self: HeaderId(1593, 0xacac***), best_finalized_self: HeaderId(1590, 0x0be81d...), best_finalized_peer_at_best_self: HeaderId(0, 0xdcdd89...) }

To send a message see the "How to send a message" section.

Full Network Docker Compose Setup

For a more sophisticated deployment which includes bidirectional header sync, message passing, monitoring dashboards, etc. see the Deployments README.

Note that images for all the bridge components are published on Docker Hub.

To run a Rialto node for example, you can use the following command:

docker run -p 30333:30333 -p 9933:9933 -p 9944:9944 \
  -it paritytech/rialto-bridge-node --dev --tmp \
  --rpc-cors=all --unsafe-rpc-external --unsafe-ws-external

How to send a message

In this section we'll show you how to quickly send a bridge message. If you want to interact with and test the bridge, see the send message section for more details.

# In `parity-bridges-common` folder
./scripts/send-message-from-millau-rialto.sh remark

After sending a message you will see the following logs, showing that the message was successfully sent:

INFO bridge Sending message to Rialto. Size: 286. Dispatch weight: 1038000. Fee: 275,002,568
INFO bridge Signed Millau Call: 0x7904...
TRACE bridge Sent transaction to Millau node: 0x5e68...

Community

The main hangout for the community is Element (formerly Riot). Element is a chat server similar to, for example, Discord. Most discussions around Polkadot and Substrate happen in various Element "rooms" (channels). So joining Element might be a good idea, anyway.

If you are interested in information exchange and development of Polkadot related bridges please feel free to join the Polkadot Bridges Element channel.

The Substrate Technical Element channel is most suited for discussions regarding Substrate itself.

Comments
  • Add Front-end docker

    TODO:

    • [x] This needs first https://github.com/paritytech/bridge-ui/pull/23 and https://github.com/paritytech/bridge-ui/pull/22
    • [x] And then we need to change/tweak the ENV/ARG to make sure the UI connects to the local docker nodes.
    P-Devops 
    opened by Tbaut 18
  • Move CI from GitHub Actions to GitLab

    New CI.

    Pipeline types

    Nightly test pipeline

    Is triggered by a CI image rebuild at 6 AM CET. It gets a new Rust nightly and runs all types of stable and nightly tests. Doesn't run builds. (The upstream will be the scripts project.)

    Nightly build pipeline

    Is triggered by a schedule at 1 AM CET. It runs all the regular tests and then the builds and publishing.

    New tag pipeline

    Is triggered by a push of a new git tag. Similar to the Nightly build pipeline.

    Master pipeline

    Is triggered by a new commit in the master branch.

    PR pipeline

    Is triggered by a new commit in any PR. For now similar to the Master pipeline.

    Web pipeline

    Is triggered by the Run Pipeline button. Depending on which branch is set there, it is generally similar to the PR or Master pipeline.

    P-Devops 
    opened by TriplEight 11
  • Finality Verifier Pallet

    The current Substrate header sync pallet is susceptible to several different attacks. These attacks involve being able to write to storage indefinitely (as in the case of #454), or requiring potentially unbounded iteration in order to make progress (as in the case of #367).

    To work around these issues, the following is being proposed. Instead of accepting headers and finality proofs directly into the existing pallet, we should go through a "middleware" pallet. The role of this new pallet will be to verify finality and ancestry proofs for incoming headers and, if successful, write the headers to the base Substrate pallet.

    This essentially flips the flow of the current pallet, in which we first import headers and later verify if they're actually part of the canonical chain (through the finality proof verification). By verifying finality first and writing headers later we can ensure that we only write "known good" headers to the base pallet.

    Additionally, by keeping the base pallet as is (more or less), we are able to ensure that applications building on the bridge can continue to have access to all headers from the canonical chain.

    The proposed API for the pallet is as follows:

    // The ancestry proof should contain headers from (`last_finalized_header`, `finalized_header`]
    fn submit_finality_proof(justification: GrandpaJustification, ancestry_proof: Vec<Header>);
    

    If this call is successful, the headers from the ancestry proof would be written to the base pallet.

    Optionally, if the proofs don't fit into a single block, we can also have a "chunk" based API to allow relayers to submit proofs across several blocks:

    // The `merkle_root` would be the root of a trie containing all the headers you plan to submit
    fn signal_long_proof(deposit: T::Balance, finality_proof: GrandpaJustification, merkle_root: Vec<u8>);
    
    // The chunk submitted here would be the leaves of the merkle trie
    fn submit_chunk(ancestry_chunk: Vec<Header>);
    

    After a call to signal_long_proof() you'd be able to call submit_chunk() as many times as required to complete the proof. We may want to restrict others from submitting proofs during this period, and add some timeouts in case we don't receive a chunk, or don't complete the proof, within a certain amount of time. In the case where we time out, or the proof is invalid, the submitter will lose their deposit.

    A-feat P-Runtime 
    opened by HCastano 11
  • Update submit finality proof weight formula

    On top of #979 (=> draft); the only new commit is https://github.com/paritytech/parity-bridges-common/commit/6ed064144c9c9511938b0fc21166bf9bb7d405d7

    Final results (weights) are on the "common prefix" sheet of this document. There are four weight-related parameters in this document: p and v, plus c and f. The f parameter represents the number of forks in the precommits ancestry, and c is the number of shared (among all forks) ancestors. In the end, I've dropped these parameters from the weight formula - we may try to re-add them later, but for now I haven't found a way to use them (spent 1.5 days on that), and it seems they don't affect the total weight that much.

    From the results you can see that the largest relative error (computed vs actual weight) is 11.5%, for the case (p=2, v=16). The larger the parameters (p and v), the smaller the error. On real chains we'll be seeing justifications close to (p=512, v=2) and (p=1024, v=2), where the error is ~1.4%.

    I've computed the approximate cost of submit_finality_proof on the Kusama chain, where (I hope) we'll see ~1024*2/3+1 precommits and close to 2 ancestors - the cost is about 0.37 KSM, given the weights I've computed on my laptop. Initially I got ~0.2, but I hadn't considered forks back then. Hopefully, after updating the weights on the reference machine, we'll have better results.

    I've also left the JustificationGeneratorParams::common_ancestors field to show what the c parameter represents in the document above. I could remove it if required, but I feel we may use it later for optimizing the formula (to refund the submitter if some forks had duplicate headers => AncestryChain::is_descendant was executed faster).

    P-Runtime PR-audit-needed 
    opened by svyatonik 10
  • Weights for `pallet-bridge-grandpa`

    This PR sets up our benchmarking framework for pallet-bridge-grandpa. There is still some future work to be done, like running benches with more authorities, but in general I think this is at a point where it can be merged, and we can improve upon it in follow-up PRs.

    I have a description of my early findings in this comment. I would recommend having this alongside benchmarking.rs while reviewing this PR.

    P-Runtime 
    opened by HCastano 10
  • Prevent concurrent finality/messages submissions via `SignedExtension`

    Related: #983

    Currently we allow anyone to submit a finality proof or deliver valid messages. This is nice, but in the real world, if many relayers (per lane) are present, they are simply going to compete with each other to submit the information.

    The winning relayer is going to be rewarded (and the submission costs might be refunded via PaysFee::No, see #983), but losing relayers are going to pay the costs of a failed dispatch.

    To alleviate the issue, we can easily prevent failing transactions from being included by adding another SignedExtension which would invalidate submission transactions that are known to fail. We have to be careful to avoid too many storage reads (to prevent DoS of block authors via validity checking), but overall it seems it should be possible.
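
    A rough standalone sketch of that idea (this is not the actual FRAME SignedExtension trait, whose signature is more involved; all names below are made up): the pre-inclusion check compares the submission's finality target with the best header the pallet already knows and rejects obsolete transactions.

    // Standalone sketch of the pre-inclusion validity check. In FRAME this
    // logic would live in a SignedExtension's `validate` hook; the types
    // here are simplified stand-ins.
    struct SubmitFinalityProof {
        target_number: u64,
    }

    enum Validity {
        Valid,
        Obsolete,
    }

    // Reject submissions whose target is not ahead of the best known header:
    // they are guaranteed to fail on dispatch, so they should not be included
    // at all (and the would-be losing relayer pays nothing).
    fn validate_submission(call: &SubmitFinalityProof, best_known_number: u64) -> Validity {
        if call.target_number <= best_known_number {
            Validity::Obsolete
        } else {
            Validity::Valid
        }
    }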

    A-feat P-Runtime 
    opened by tomusdrw 9
  • Prepare actual weights for runtime transactions

    In #69 I had to switch to Substrate master to be able to use GrandpaJustification::decode_and_verify_finalizes() from builtin. It turned out that the main change is that all runtime calls (transactions) now require annotated weights. So before considering the PoA -> Substrate headers sync finished, we need to prepare actual weights for all calls. I've started to do that, but it's quite a complicated process, and as some runtime changes are still planned (I'm mostly talking about #62 and maybe #38), it isn't the right moment to do that now.

    The main problem is that we can't compute the weight of a header import call by looking at the header itself. The main source of uncertainty is the finality computation - we may need to read an arbitrary number of headers (and write some updates after that) if validators haven't finalized headers for too long. At the beginning I had some ideas about caching computation results in storage, so we wouldn't need to recompute them on every block, but iirc I got stuck on something - so most probably this is impossible, and we'll need storage reads anyway :/ But it may be worth looking at again.

    That said, most probably we need to introduce some amortized weight for import calls. This may also affect some params/default constants in the relay, because the weight may be too big to fit into a 'normal' block.

    So I'm proposing to mark all methods with #[weight=0] (or whatever you suggest) and change that afterwards.

    A-feat P-Runtime 
    opened by svyatonik 9
  • Make Substrate Pallet Instantiable

    Hello, I'm making the bridge pallet instantiable, per the requirements of my project to have multiple bridges connected, and found issue https://github.com/paritytech/parity-bridges-common/issues/360, which is open and covers exactly that.

    WIP - need to fix tests.

    A-feat P-Runtime 
    opened by MaciejBaj 8
  • Overhaul releasing process

    Due to the long image builds we have to use the self-hosted runner. A self-hosted runner can't use buildah, so we have to update to the newer version of the deprecated official Docker action. This is, in fact, not bad: the action now uses BuildX. BuildX, however, has an ugly bug.

    Overhaul of publishing

    • remove multistage build as it is a CI recursion
    • upgrade to ubuntu:20.04 base
    • CI: move to use buildah actions
    • chore
    P-Devops 
    opened by TriplEight 8
  • Initial Substrate Header Chain Implementation

    This PR introduces the first version of the Substrate bridge pallet. The idea behind this pallet is that it only really cares about two things:

    1. That a header is valid.
    2. That a header has been finalized.

    Using this simple interface it can accept headers from a different Substrate chain and keep track of their status. Higher-level applications that build on this will mostly care only about headers that have been finalized.

    Some notable things which will be addressed in follow-ups include:

    • Actually checking finality proofs
    • Runtime APIs for querying the state of the chain
    • Pallet configuration parameters (right now we only do config through the chain_spec)
    • Any sort of fork management
    • A deeper storage API (right now it's a very thin wrapper)

    I removed some of the code to wire the pallet into the runtime because of the changes introduced in #341. I'll add that back in later.

    Note that while this PR looks kinda big, I think a decent chunk of the line count comes from the Verifier tests. These are kinda verbose at the moment, but that's also something that can be improved in the future.

    opened by HCastano 8
  • Merkle Mountain Range for efficient Grandpa ancestry proofs

    Imagine a chain like this:

    ...   X           // Best finalized chain known by the (on-chain) light client
    ... 4 5 6 7 8 9 A // Block numbers 4-10 (0xA = 10)
    ... - F - - - - F // F means the header is finalized (we import with justification)
    

    Why?

    Light client implementations have a hard time deciding whether headers 6-9 should be imported if header 10 (0xA) has not yet been seen.

    Our current approach (Solidity for PoA, and most likely the upcoming Substrate light client) is to simply import headers as we see them, verifying only their parent hash to make sure they extend the right (finalized) fork, and to mark them finalized at the very end, when we see justification data for block number 10.

    This approach has some drawbacks:

    • tracking multiple forks is expensive, but necessary (since we don't even verify block authorship signatures)
    • some pruning is required (or marking individual blocks as finalized)
    • each and every block has to be imported, and they all need to be imported sequentially, which is expensive.
    • super-fast chains are going to be hard to bridge due to the amount of required header-chain data

    For many chains, where transaction costs (both computation and storage) are high, this approach might be suboptimal.

    How?

    The idea is to be able to import header number 10 directly, without requiring 6-9 to be imported first (or at all). While we could in theory simply accept header 10 if it's signed correctly by the current validator set, we might run into two issues:

    1. For some applications, data from headers 6-9 (or at least the header hashes) might be required (for instance, to prove transaction existence)
    2. Block 10 might be built on a different fork than block 5, so we must verify block 10's ancestry.

    In FRAME-based Substrate runtimes, the frame_system pallet already stores the block hashes of recent blocks, so it might be possible to use this data to prove ancestry (you simply present a storage proof at block 10). However, this data is pruned (MaxBlockHashes), and while in theory we could extend this period if finality data were on-chain as well, there are more efficient ways of doing it, namely: https://github.com/paritytech/substrate/issues/2053

    Or even better Merkle Mountain Ranges: https://github.com/paritytech/substrate/issues/3722

    Example 1: Merkle Mountain Ranges (MMR) #2053, https://github.com/mimblewimble/grin/blob/master/doc/mmr.md. For many kinds of auxiliary blockchain protocols, it's important to be able to prove that some ancient block header is an ancestor of the finalized chain head. MMRs provide a good way of doing that. We want to write a runtime module to keep track of the peaks (roots) of a bunch of different Merkle tries - there will be log2(N) of these for N blocks (and N trie nodes in total). You can add to the MMR with only the peaks, and prove ancestry if you have all the nodes. We'd want full nodes to keep track of all of the MMR nodes by keeping them in offchain storage. However, if even one block in the chain is not executed, it is possible to end up in a situation where ancestry can no longer be proven.

    Details

    We should extend frame_system to store MMR peaks and use the Indexing API to write all the nodes to the offchain database. Every node with indexing enabled will then be able to construct MMR ancestry proofs that can be efficiently verified against the on-chain data. The MMR offchain data can also be reconstructed from the header chain (perhaps within an offchain worker), but reconstructing the structure on demand is not feasible (it's an O(n) process).
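
    To make the peak bookkeeping concrete, here is a minimal standalone sketch (an illustration only, unrelated to the actual Substrate implementation; the hash function is a non-cryptographic stand-in). Appending a leaf merges equal-height peaks, which is why only ~log2(N) peaks ever need to be stored on-chain:

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Non-cryptographic stand-in for the real hash function.
    fn hash_pair(left: u64, right: u64) -> u64 {
        let mut h = DefaultHasher::new();
        (left, right).hash(&mut h);
        h.finish()
    }

    // Each peak is (height, root hash); the chain only stores these peaks.
    #[derive(Default)]
    struct Mmr {
        peaks: Vec<(u32, u64)>,
    }

    impl Mmr {
        // Append one leaf (e.g. a block hash), merging equal-height peaks.
        fn push(&mut self, leaf: u64) {
            let mut node = (0u32, leaf);
            while let Some(&(height, root)) = self.peaks.last() {
                if height != node.0 {
                    break;
                }
                self.peaks.pop();
                node = (height + 1, hash_pair(root, node.1));
            }
            self.peaks.push(node);
        }
    }

    fn main() {
        let mut mmr = Mmr::default();
        for block_number in 0u64..1_000 {
            mmr.push(block_number); // stand-in for the block hash
        }
        // 1000 = 0b1111101000, so exactly 6 peaks (one per set bit).
        assert_eq!(mmr.peaks.len(), 6);
    }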

    So to circle back to our example: the light client could import header 10 directly, but it would require extra proof data:

    1. Storage proof of MMR peaks
    2. MMR proof of 5 being an ancestor of 10.

    Also, any application built on top of the header chain could verify that, say, header 7 is part of the chain by providing exactly the same proofs.

    This issue needs to be implemented in Substrate repo.

    A-feat P-Runtime 
    opened by tomusdrw 8
  • Update project level docs

    part of #1406

    WIP, but feel free to read and make any suggestions. TODOs:

    • [x] Messages Relay Sequence Diagram;
    • [x] Complex Relay Sequence Diagram;
    • [x] Polkadot <> Kusama (also Rococo <> Wococo) bridge documentation (architecture, problems and solutions, rewards + all non-essential bridge components);
    • [x] detailed deployments documentation.
    opened by svyatonik 0
  • Optimize GRANDPA justifications before submitting them to the pallet-bridge-grandpa

    I noticed one strange thing when testing our relay with Kusama/Polkadot - within the same session, GRANDPA justifications received from the node contain different numbers of signatures. E.g. given a session with 1000 validators in Kusama, one justification has 672 precommits, another has 674, and the next one has 676. Since on-chain signature verification is expensive, it'd be better for relayers to "optimize" justifications before submitting them. Iiuc, in this (1000 validators) case we only need 667 signatures, so the extra ~10 signatures may be removed before sending the justification to the pallet.
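
    A standalone sketch of the pruning arithmetic (the types are hypothetical stand-ins, not the real justification structures; a real optimizer would verify each retained precommit instead of blindly truncating):

    // Trim a GRANDPA-style justification to the minimal supermajority
    // before submitting it on-chain.
    struct SignedPrecommit; // target hash/number + signature + authority id

    struct Justification {
        precommits: Vec<SignedPrecommit>,
    }

    // GRANDPA needs strictly more than 2/3 of the authority set:
    // for n = 1000 validators that is 667 signatures.
    fn supermajority(n: usize) -> usize {
        n - (n - 1) / 3
    }

    fn prune(justification: &mut Justification, authority_set_len: usize) {
        let needed = supermajority(authority_set_len);
        if justification.precommits.len() > needed {
            // Assumes the retained precommits are valid; a real implementation
            // would check signatures/ancestry before deciding what to drop.
            justification.precommits.truncate(needed);
        }
    }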

    Good First Issue P-Relay 
    opened by svyatonik 1
  • BridgeHub Kusama<>Polkadot parachains story

    TODO: some short intro

    Base

    Cumulus github with BridgeHub runtimes: https://github.com/paritytech/cumulus/tree/master/parachains/runtimes/bridge-hubs

    Local setup

    TODO: update zombienet

    Live environment

    Deployment

    With the help of the DevOps team: https://github.com/paritytech/devops/issues/2196

    Parachain collators + validators

    BridgeHubKusama: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fkusama-bridge-hub-rpc.polkadot.io/#/explorer

    Logs: loki Telemetry: here Monitoring Grafana

    Kusama: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fkusama-rpc.polkadot.io#/explorer

    BridgeHubPolkadot: TODO:

    Logs: here Telemetry: here Monitoring: here

    Polkadot: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Frpc.polkadot.io#/explorer

    Relayers

    Logs: [here](TODO:grafana logs)

    Monitoring/alerts

    TODO: based on https://github.com/paritytech/parity-bridges-common/issues/1715

    Relayer accounts

    BridgeHubKusama: TODO: BridgeHubPolkadot: TODO:

    Initialize bridges

    Init bridge: Kusama -> Polkadot

    1. Generate the init-bridge call as hex-encoded data:
    TODO: cmd

    2. Check the hex-encoded data here: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fpolkadot-bridge-hub-rpc.polkadot.io#/extrinsics/decode

    3. Simulate governance by submitting an XCM::Transact extrinsic with the hex-encoded data for para_id=TODO from the Polkadot relay node

    Init bridge: Polkadot -> Kusama

    1. Generate the init-bridge call as hex-encoded data:
    TODO: cmd

    2. Check the hex-encoded data here: https://polkadot.js.org/apps/?rpc=wss%3A%2F%2Fkusama-bridge-hub-rpc.polkadot.io#/extrinsics/decode

    3. Simulate governance by submitting an XCM::Transact extrinsic with the hex-encoded data for para_id=TODO from the Kusama relay node

    Open HRMP channels

    TODO:

    Asset transfer use-case

    TODO: create/link issues

    Other use-cases

    TODO:


    Issues/TODOs:

    • [ ] ///
    opened by bkontur 1
  • Metric to see pending relayer reward

    I.e. the value from the pallet_bridge_relayers::RelayerRewards map. We can use it, e.g., to ensure that balance + pending_reward stays the same at the target chain.
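
    A sketch of what such a metric loop could look like (every API below is a hypothetical placeholder; the real relayer has its own client and metrics infrastructure):

    // Periodically read the relayer's pending reward from the
    // pallet_bridge_relayers::RelayerRewards map and expose it as a gauge.
    use std::{thread, time::Duration};

    // Hypothetical chain client.
    trait ChainClient {
        // Storage read of RelayerRewards for our relayer account; None if empty.
        fn relayer_rewards(&self, relayer: &str) -> Option<u128>;
        fn free_balance(&self, relayer: &str) -> u128;
    }

    // Hypothetical metrics gauge.
    trait Gauge {
        fn set(&self, value: u128);
    }

    fn run_reward_metric(client: &dyn ChainClient, gauge: &dyn Gauge, relayer: &str) {
        loop {
            let pending = client.relayer_rewards(relayer).unwrap_or(0);
            gauge.set(pending);
            // The invariant suggested above: balance + pending reward should
            // stay the same at the target chain (modulo transaction fees).
            let _balance_plus_reward = client.free_balance(relayer) + pending;
            thread::sleep(Duration::from_secs(60));
        }
    }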

    Good First Issue P-Relay 
    opened by svyatonik 0
  • Tune MAX_UNREWARDED_RELAYERS_IN_CONFIRMATION_TX constant on production chains

    Currently it is set to 8192, which is enough, but nothing stops us from increasing it further, because it doesn't affect performance in the new rewards scheme. We may also think of increasing MAX_UNCONFIRMED_MESSAGES_IN_CONFIRMATION_TX - its effect on performance should also be lower than before.

    This must be done after we see the final weights for (at least the R/W) bridge hubs.

    A-chores P-Runtime 
    opened by svyatonik 0