Subspace Monorepo

Overview

This is a monorepo for the Subspace Network implementation, primarily containing the Subspace node/client (built with the Substrate framework) and the farmer app.

Repository structure

The structure of this repository is the following:

  • crates contains Subspace-specific Rust crates used to build the node and farmer; most follow Substrate naming conventions
    • subspace-node is an implementation of a node for the Subspace protocol
    • subspace-farmer is a CLI farmer app
  • substrate contains modified copies of Substrate's crates that we use for testing

How to run

This is a monorepo with multiple binaries and the workflow is typical for Rust projects:

  • cargo run --release --bin subspace-node -- --dev --tmp to run a node
  • cargo run --release --bin subspace-farmer -- farm to start farming

NOTE: You need the nightly Rust toolchain with the wasm32-unknown-unknown target installed, or you will get a compilation error.
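One way to set up the required toolchain, assuming rustup is installed:

```shell
# Install the nightly toolchain and the WASM target it needs
rustup toolchain install nightly
rustup target add wasm32-unknown-unknown --toolchain nightly
```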

You can find README files in the corresponding crates for requirements, multi-node setup, and other details.

Comments
  • Farmer crash

    env

    2022-06-08 17:48:23 [PrimaryChain] 💻 Operating system: linux
    2022-06-08 17:48:23 [PrimaryChain] 💻 CPU architecture: x86_64
    2022-06-08 17:48:23 [PrimaryChain] 💻 Target environment: gnu
    2022-06-08 17:48:23 [PrimaryChain] 💻 CPU: AMD Ryzen Threadripper 3960X 24-Core Processor
    2022-06-08 17:48:23 [PrimaryChain] 💻 CPU cores: 24
    2022-06-08 17:48:23 [PrimaryChain] 💻 Memory: 257670MB
    2022-06-08 17:48:23 [PrimaryChain] 💻 Kernel: 5.4.0-104-generic
    2022-06-08 17:48:23 [PrimaryChain] 💻 Linux distribution: Ubuntu 20.04.4 LTS
    2022-06-08 17:48:23 [PrimaryChain] 💻 Virtual machine: no
    

    Log:

    ERROR subspace_farmer::plotting: Failed to write encoded pieces error=Corruption: block checksum mismatch: stored = 213466664, computed = 3211341933, type = 1

    bug 
    opened by hairtail 24
  • Domain transaction pool router

    At the moment, we do not have a way for executors and relayers to submit extrinsics bound to domains that the local executor may not be running. This PR aims to achieve that using the primary chain network. A domain's extrinsic is wrapped as a primary chain unsigned extrinsic and submitted to the network. In each executor, the transaction router unwraps the domain extrinsic and routes it if the local client runs that domain's tx_pool. The unsigned extrinsic's pre-check will fail, so it will never make it into a primary block. The relayer now creates domain extrinsics and submits them on the primary network.

    As for the TxPool wrapper, I have updated submit_at, since that seems to be the only one we need to worry about, as it is called from the network layer.

    Closes: #935

    Code contributor checklist:

    opened by vedhavyas 15
  • Farmer uses a lot of RAM

    The node loops on one block and then fills all the RAM while staying on the same block.

    https://user-images.githubusercontent.com/107106580/172573363-95f40771-8daf-4818-9d5a-0057334a072b.mp4

    bug 
    opened by Neverhood13 15
  • Fraud proof verification on the primary node

    This PR aims to implement the fraud proof verification on the primary node, which is done by introducing a fraud proof externality extension so that it can be used in the runtime. There are also a few refactorings to the FraudProof struct so that it contains all the necessary information for the primary node to verify it.

    There are still a few TODOs that may not necessarily have to be done in this PR:

    • [ ] Test ProofVerifier
    • [ ] Fetch the secondary runtime code from primary chain.
    opened by liuchengxu 14
  • Stack-overflow during initial re-commit after sync is done

    When plotting larger plots, e.g. 18700G (i.e. 160 plots), the farmer crashes with a stack overflow during the initial re-commit after sync is done.

    At this point you see these messages, one for each plot:

    2022-06-15T19:10:46.002494Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.002519Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.002588Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.002640Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.003009Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.004324Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.004353Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.004395Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.004397Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    2022-06-15T19:10:46.004455Z  INFO subspace_farmer::farming: Salt updated, recommitting in background new_salt=79f25142c329a0d0
    

    After that it continues to archive segments, until the crash:

    2022-06-15T18:33:16.314271Z  INFO subspace_farmer::archiving: Plotted segment segment_index=109991
    2022-06-15T18:35:29.977825Z  INFO subspace_farmer::archiving: Plotted segment segment_index=109992
    2022-06-15T18:36:28.784317Z  INFO subspace_farmer::archiving: Plotted segment segment_index=109993
    2022-06-15T18:37:23.277314Z  INFO subspace_farmer::archiving: Plotted segment segment_index=109994
    2022-06-15T18:38:01.182120Z  INFO subspace_farmer::archiving: Plotted segment segment_index=109995
    
    thread '<unknown>' has overflowed its stack
    fatal runtime error: stack overflow
    

    Commitment DBs for salt 79f25142c329a0d0 are around 350M in size after the crash, compared to 450M for a finished process, so none of the re-commits actually finished.

    bug 
    opened by madMAx43v3r 13
  • Block prepare storage changes error: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed

    2022-04-19 15:45:47 🔃 Updating next salt on eon 5 slot 1650343790
    
    2022-04-19 15:45:47 panicked at 'Storage root must match that calculated.', /root/.cargo/git/checkouts/substrate-7e08433d4c370a21/e6def65/frame/executive/src/lib.rs:488:9
    
    
    2022-04-19 15:45:47 Block prepare storage changes error: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm 'unreachable' instruction executed
    WASM backtrace:
    
        0: 0x1b76 - <unknown>!rust_begin_unwind
        1:  0xf9d - <unknown>!core::panicking::panic_fmt::h9cff3f1f9c956d6b
        2: 0x982e7 - <unknown>!Core_execute_block
    
    2022-04-19 15:45:47 💔 Error importing block 0x083ba5a65cf858dc1d11959b51db47287ceec2161eaa11b3ab7d1f447d92317a: consensus error: Import failed: Error at calling runtime api: Execution failed: Execution aborted due to trap: wasm trap: wasm `unreachable` instruction executed
    WASM backtrace:

        0: 0x1b76 - <unknown>!rust_begin_unwind
        1:  0xf9d - <unknown>!core::panicking::panic_fmt::h9cff3f1f9c956d6b
        2: 0x982e7 - <unknown>!Core_execute_block
    
    
    bug 
    opened by SilversterSunset 13
  • Node is banned by other node during sync

    We have observed a situation where one node bans another node that tries to sync large amounts of data from it.

    For instance, on a testnet where we had 2 nodes, a third node tried to sync ~200G of data.

    We need to figure out why this happens and fix it or disable corresponding logic as a workaround (see https://github.com/paritytech/substrate/issues/10130).

    bug 
    opened by nazar-pc 11
  • [Document] More detailed instructions on build from source

    Community miners run many different CPU architectures, OSes, and OS versions, all of which should be considered. The easiest way to solve this problem is a document that helps miners build from source themselves.

    documentation 
    opened by hairtail 11
  • SubstrateCli(Input("Error parsing spec file: missing field `executionChainSpec` at line 114 column 1"))

    On Windows 10, the error SubstrateCli(Input("Error parsing spec file: missing field executionChainSpec at line 114 column 1")) occurs. The command .\subspace-node.exe purge-chain --chain testnet-compiled does not help.

    opened by Sleep96 10
  • gemini-1b-2022-jun-18 silently exits on Windows without doing anything

    Seems to be Windows-specific.

    Two threads on Discord: https://discord.com/channels/864285291518361610/864285291518361613/987704674838782062 https://discord.com/channels/864285291518361610/864285291518361613/987596219658436678

    bug 
    opened by nazar-pc 9
  • Extra check for single disk plot shutdown

    I discovered a deadlock on drop of a single disk plot. This PR fixes it by adding a shutdown check on each slot.

    Code contributor checklist:

    opened by i1i1 8
  • Unexpected errors with an unknown file

    It seems that when starting a farmer with a plot size much larger than the free space on the drive, errors appear involving an unknown file. Farmer's command line:

    subspace-farmer --base-path /mnt/subspace/subspace-farm --farm path=/mnt/subspace/subspace-farm,size=1000G farm --reward-address st...
    

    There was about 100 GB free on the drive.

    On first run, the following error occurs:

    2022-12-20T14:45:50.419396Z  INFO subspace_farmer::utils: Increase file limit from soft to hard (limit is 25000)
    2022-12-20T14:45:50.419618Z  INFO subspace_farmer::commands::farm Connecting to node RPC at ws://127.0.0.1:9944
    2022-12-20T14:45:50.420908Z  INFO subspace_farmer::commands::farm: Record cache DB configured. record_cache_db_path="/mnt/subspace/subspace-farm/records_cache_db" record_cache_size=65536
    2022-12-20T14:45:50.421384Z  INFO subspace_networking::behavior::custom_record_store: New record cache initialized.
    2022-12-20T14:45:50.421558Z  INFO subspace_networking::create: DSN instance configured. allow_non_global_addresses_in_dht=true peer_id=12D3KooWGgwJcm89XLwYvShVHGBzQh5MEZj9vTN3VBY12UUDm8t8
    2022-12-20T14:45:50.421922Z  INFO subspace_farmer::commands::farm: Connecting to node RPC at ws://127.0.0.1:9944
    Error: I/O error: No such file or directory (os error 2)
    Caused by:
        No such file or directory (os error 2)
    

    All subsequent attempts result in a different error:

    2022-12-20T14:46:00.669260Z  INFO subspace_farmer::utils: Increase file limit from soft to hard (limit is 25000)
    2022-12-20T14:46:00.669511Z  INFO subspace_farmer::commands::farm: Connecting to node RPC at ws://127.0.0.1:9944
    2022-12-20T14:46:00.670722Z  INFO subspace_farmer::commands::farm: Record cache DB configured. record_cache_db_path="/mnt/subspace/subspace-farm/records_cache_db" record_cache_size=65536
    2022-12-20T14:46:00.671097Z  INFO subspace_networking::behavior::custom_record_store: New record cache initialized.
    2022-12-20T14:46:00.671286Z  INFO subspace_networking::create: DSN instance configured. allow_non_global_addresses_in_dht=true peer_id=12D3KooWGgwJcm89XLwYvShVHGBzQh5MEZj9vTN3VBY12UUDm8t8
    2022-12-20T14:46:00.671647Z  INFO subspace_farmer::commands::farm: Connecting to node RPC at ws://127.0.0.1:9944
    Error: I/O error: File exists (os error 17)
    Caused by:
        File exists (os error 17)
    
    bug good first issue farmer 
    opened by SilversterSunset 0
  • Fork aware primary block processing

    This PR aims to handle the primary block forks properly. Basically, one incoming primary block import notification could mean multiple primary blocks we need to process in case the primary chain forks.

    Tested in a local network with multiple farmers, now the domain forks whenever the primary chain forks.

    2022-12-23 00:41:36.915  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #555,0x7e2d…c18a to #555,0x5e3e…d1f9, common ancestor #554,0x593d…9d31    
    2022-12-23 00:41:38.504  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #557,0xc0f0…68e2 to #557,0x94b8…d90e, common ancestor #556,0x787e…3d28    
    2022-12-23 00:41:38.506  INFO tokio-runtime-worker sc_informant: [CoreDomain] ♻️  Reorg on #557,0xe471…8a27 to #557,0xa594…fd25, common ancestor #556,0x77ab…8c41    
    2022-12-23 00:41:38.508  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #557,0xd008…fa6c to #557,0x11be…01dd, common ancestor #556,0x59a1…0cca    
    2022-12-23 00:41:44.321  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #560,0x1afc…6ad7 to #560,0xbbcd…1f74, common ancestor #559,0xfd68…a546    
    2022-12-23 00:41:44.360  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #560,0x0976…b034 to #560,0xe161…3528, common ancestor #559,0x4659…c933    
    2022-12-23 00:41:44.375  INFO tokio-runtime-worker sc_informant: [CoreDomain] ♻️  Reorg on #560,0xa8b0…3a23 to #560,0x7656…f0e1, common ancestor #559,0x488b…4a5f    
    2022-12-23 00:41:44.649  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #560,0xbbcd…1f74 to #561,0x9668…1737, common ancestor #559,0xfd68…a546    
    2022-12-23 00:41:44.679  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #560,0xe161…3528 to #561,0xd291…3f15, common ancestor #559,0x4659…c933    
    2022-12-23 00:41:48.664  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #562,0x1846…f0bf to #562,0x756d…d3cb, common ancestor #561,0x9668…1737    
    2022-12-23 00:41:48.682  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #562,0x74e9…4617 to #562,0x7d05…5784, common ancestor #561,0xd291…3f15    
    2022-12-23 00:41:48.693  INFO tokio-runtime-worker sc_informant: [CoreDomain] ♻️  Reorg on #562,0x11e4…e3ab to #562,0x5cda…248d, common ancestor #561,0xfa16…214a    
    2022-12-23 00:41:48.996  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #563,0x41db…089a to #563,0x951a…e429, common ancestor #561,0x9668…1737    
    2022-12-23 00:41:49.026  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #563,0xc445…8871 to #563,0xf00b…f148, common ancestor #561,0xd291…3f15    
    2022-12-23 00:42:06.215  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #575,0xdc48…bfe1 to #575,0x0497…ea24, common ancestor #574,0x4785…4fb5    
    2022-12-23 00:42:06.225  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #575,0x72cf…6ce7 to #575,0xcb82…af8c, common ancestor #574,0x77a7…c29a    
    2022-12-23 00:42:06.227  INFO tokio-runtime-worker sc_informant: [CoreDomain] ♻️  Reorg on #575,0x7bd5…251a to #575,0x1db0…5381, common ancestor #574,0xb45e…057b    
    2022-12-23 00:42:20.069  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #584,0x060d…03a9 to #584,0x8871…f48a, common ancestor #583,0xb522…4b91    
    2022-12-23 00:42:20.087  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #584,0x8d1d…8a5c to #584,0xb41e…21d0, common ancestor #583,0x7fa5…d4f1    
    2022-12-23 00:42:20.405  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #586,0x1e04…5650 to #586,0xa8d9…b9a8, common ancestor #585,0x9d06…1dd7    
    2022-12-23 00:42:20.408  INFO tokio-runtime-worker sc_informant: [CoreDomain] ♻️  Reorg on #586,0x4da2…3115 to #586,0xcd18…eaa0, common ancestor #585,0x3abf…49f6    
    2022-12-23 00:42:20.408  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #586,0xcc21…7b58 to #586,0x3ca7…d3fa, common ancestor #585,0xacf2…7f6d    
    2022-12-23 00:42:26.066  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #590,0xf57b…f879 to #590,0xe23c…51af, common ancestor #589,0x79fd…e6bd    
    2022-12-23 00:42:26.079  INFO tokio-runtime-worker sc_informant: [CoreDomain] ♻️  Reorg on #590,0xc052…32c5 to #590,0x67df…48c0, common ancestor #589,0x9b5c…d083    
    2022-12-23 00:42:26.080  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #590,0x6b47…3b77 to #590,0xed6b…6da2, common ancestor #589,0xea22…9d03    
    2022-12-23 00:42:27.422  INFO tokio-runtime-worker sc_informant: [PrimaryChain] ♻️  Reorg on #591,0x7385…0415 to #591,0xf609…b6f1, common ancestor #589,0x79fd…e6bd    
    2022-12-23 00:42:27.454  INFO tokio-runtime-worker sc_informant: [CoreDomain] ♻️  Reorg on #591,0xb923…83a8 to #591,0xd6a1…a7b0, common ancestor #589,0x9b5c…d083    
    2022-12-23 00:42:27.473  INFO tokio-runtime-worker sc_informant: [SecondaryChain] ♻️  Reorg on #591,0xe782…e23f to #591,0x879b…c623, common ancestor #589,0xea22…9d03    
    

    Close #1034

    This is not the whole story; I still see some edge cases locally and need more time to test it both locally and on gemini-3b, but the meat is there. I hope it can be reviewed sooner rather than later.

    Code contributor checklist:

    opened by liuchengxu 0
  • Check if is_global_address_or_dns can be used to make nodes to connect to private IPs

    It seems that is_global_address_or_dns allows circumventing the check for private IPs via a DNS record that resolves to a private IP, which can get nodes locked out on Hetzner. We need to make sure the check for private IPs is applied in that case as well.

    networking 
    opened by nazar-pc 0
  • Upgrade upstream protocol libp2p to support our use-cases

    This is a tracking issue for upstream libp2p protocol tasks that should be fixed (either by us or by Protocol Labs) for our use cases:

    • [ ] error on provider records republication: https://github.com/libp2p/rust-libp2p/issues/3236
    • [ ] incorrect iterator handling: https://github.com/libp2p/rust-libp2p/blob/1b4624f74a5553b2af081a02bf54bdf4635ca277/protocols/kad/src/jobs.rs#L303
    • [ ] incorrect inbound stream handling: https://github.com/libp2p/rust-libp2p/discussions/3235
    opened by shamil-gadelshin 0
  • Block importing for nodes from DSN.

    We won't have archival nodes in the Subspace network long-term; we will synchronize blocks from the DSN instead of from archival nodes.

    Tasks:

    • [x] Restore "import" command for node app from DSN v1: #880
    • [ ] Import blocks on node startup from the DSN: combine restored "import" command logic with the "normal" node start.
    • [ ] Enable block importing after interruptions in a live node once more than 100 blocks (the current confirmation depth) have passed. This will most likely require a major refactoring of the upstream Substrate framework.
    networking node epic 
    opened by shamil-gadelshin 0
  • DSN. Get pieces from the archival storage to the piece cache on the farmer.

    Currently, we only populate the piece cache (L2) from node piece announcements, but we also need to populate it on archival storage (L1) retrieval, since falling back to L1 means the piece was missing from the piece cache (L2).

    farmer networking 
    opened by shamil-gadelshin 0
Releases: gemini-3b-2022-dec-19