Solana

[Badges: Solana crate · Solana documentation · Build status · codecov]

Overview

Web-Scale Blockchain for fast, secure, scalable, decentralized apps and marketplaces.

Building

1. Install rustc, cargo and rustfmt.

$ curl https://sh.rustup.rs -sSf | sh
$ source $HOME/.cargo/env
$ rustup component add rustfmt

When building the master branch, please make sure you are using the latest stable Rust version by running:

$ rustup update

When building a specific release branch, check the Rust version in ci/rust-version.sh and, if necessary, install that version by running:

$ rustup install VERSION

Note that if this is not the latest Rust version on your machine, cargo commands may require an override (for example, via rustup override set) to use the correct version.

On Linux systems you may need to install additional packages such as libssl-dev, pkg-config, and zlib1g-dev. On Ubuntu:

$ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang make

On Apple M1 Macs, make sure your terminal and Homebrew are set up to use Rosetta, which you can install with:

$ softwareupdate --install-rosetta

2. Download the source code.

$ git clone https://github.com/solana-labs/solana.git
$ cd solana

3. Build.

$ cargo build

Testing

Run the test suite:

$ cargo test

Starting a local testnet

To start your own testnet locally, follow the instructions in the online docs.

Accessing the remote development cluster

  • devnet - a stable public cluster for development, accessible via devnet.solana.com. Runs 24/7. Learn more about the public clusters in the online docs.

Benchmarking

First install the nightly build of rustc; cargo bench requires unstable features that are only available in the nightly build.

$ rustup install nightly

Run the benchmarks:

$ cargo +nightly bench

Release Process

The release process for this project is described here.

Code coverage

To generate code coverage statistics:

$ scripts/coverage.sh
$ open target/cov/lcov-local/index.html

Why coverage? While most see coverage as a code quality metric, we see it primarily as a developer productivity metric. When a developer makes a change to the codebase, presumably it's a solution to some problem. Our unit-test suite is how we encode the set of problems the codebase solves. Running the test suite should indicate that your change didn't infringe on anyone else's solutions. Adding a test protects your solution from future changes.

If you don't understand why a line of code exists, try deleting it and running the unit tests. The nearest test failure should tell you what problem was solved by that code. If no test fails, go ahead and submit a Pull Request that asks, "what problem is solved by this code?" On the other hand, if a test does fail and you can think of a better way to solve the same problem, a Pull Request with your solution would most certainly be welcome! Likewise, if rewriting a test can better communicate what code it's protecting, please send us that patch!

Disclaimer

All claims, content, designs, algorithms, estimates, roadmaps, specifications, and performance measurements described in this project are done with the Solana Foundation's ("SF") good faith efforts. It is up to the reader to check and validate their accuracy and truthfulness. Furthermore, nothing in this project constitutes a solicitation for investment.

Any content produced by SF or developer resources that SF provides, are for educational and inspirational purposes only. SF does not encourage, induce or sanction the deployment, integration or use of any such applications (including the code comprising the Solana blockchain protocol) in violation of applicable laws or regulations and hereby prohibits any such deployment, integration or use. This includes use of any such applications by the reader (a) in violation of export control or sanctions laws of the United States or any other applicable jurisdiction, (b) if the reader is located in or ordinarily resident in a country or territory subject to comprehensive sanctions administered by the U.S. Office of Foreign Assets Control (OFAC), or (c) if the reader is or is working on behalf of a Specially Designated National (SDN) or a person subject to similar blocking or denied party prohibitions.

The reader should be aware that U.S. export control and sanctions laws prohibit U.S. persons (and other persons that are subject to such laws) from transacting with persons in certain countries and territories or that are on the SDN list. As a project based primarily on open-source software, it is possible that such sanctioned persons may nevertheless bypass prohibitions, obtain the code comprising the Solana blockchain protocol (or other project code or applications) and deploy, integrate, or otherwise use it. Accordingly, there is a risk to individuals that other persons using the Solana blockchain protocol may be sanctioned persons and that transactions with such persons would be a violation of U.S. export controls and sanctions law. This risk applies to individuals, organizations, and other ecosystem participants that deploy, integrate, or use the Solana blockchain protocol code directly (e.g., as a node operator), and individuals that transact on the Solana blockchain through light clients, third party interfaces, and/or wallet software.

Comments
  • Use QUIC client in voting service (backport #23713)


    This is an automatic backport of pull request #23713 done by Mergify.



    automerge 
    opened by mergify[bot] 145
  • Add the ability to sign a message to the Solana Ledger app


    Problem

    At Phantom, we've implemented a signMessage feature that allows an application to request signing an arbitrary message. This feature is being used by Audius on Solana to authenticate users, and is popular on Ethereum for applications like OpenSea that also rely on it to authenticate their users.

    A long-standing issue with this feature is that the Solana Ledger app (https://github.com/LedgerHQ/app-solana) does not support signing arbitrary messages, and so this feature does not work for Ledger users.

    Proposed Solution

    Add support for signing arbitrary data (or just utf8 strings) to the Solana Ledger app. @t-nelson mentioned that support would need to be added to solana-remote-wallet and the CLI as well.
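
    For illustration, here is what arbitrary-message signing looks like with solana-sdk and a software keypair (a hedged sketch: the Keypair below merely stands in for what the Ledger app would do on-device; this is not the proposed Ledger-app code):

    use solana_sdk::signature::{Keypair, Signer};

    fn main() {
        let keypair = Keypair::new();
        let message = b"Sign in to Audius";

        // Sign the raw bytes (not a transaction).
        let signature = keypair.sign_message(message);

        // Anyone holding the pubkey can verify the signature.
        assert!(signature.verify(keypair.pubkey().as_ref(), message));
        println!("signature: {signature}");
    }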

    opened by fragosti 90
  • Add bignum syscalls


    Problem

The problem is that Solana did not have bignum support for Solana programs.

    Summary of Changes

Add syscalls to cover fundamental bignum operations at a fair execution cost. This PR is the successor to #17082.

    Fixes #

    Draft reviewers: @jackcmay @arthurgreef @seanyoung

    stale community 
    opened by FrankC01 71
  • Move account data to persistent storage


    Problem

RAM is much more expensive and adds significantly to the cost of operating a full node (16 GB of RAM costs about the same as 500 GB of high-speed NVMe SSD). Look into ways to reduce RAM usage by moving some of the data onto SSDs and loading / storing it on demand.

    Summary of Changes

    Implements #2769

To help reduce the nodes' RAM usage, persist account storage on NVMe SSDs and load / store accounts from them on demand.

    Store account information across two files: Index and Data.

    • Index: contains the offset into the data file
    • Data: contains the length, followed by the account data

    The accounts are split across NVMe SSDs using the pubkey as the key.
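
    A hypothetical sketch of that two-file layout (not the actual implementation):

    use std::fs::File;
    use std::io::{Seek, SeekFrom, Write};

    fn append_account(index: &mut File, data: &mut File, account_bytes: &[u8]) -> std::io::Result<()> {
        // Index entry: the offset of this record in the data file.
        let offset = data.seek(SeekFrom::End(0))?;
        index.write_all(&offset.to_le_bytes())?;

        // Data record: length prefix followed by the account data.
        data.write_all(&(account_bytes.len() as u64).to_le_bytes())?;
        data.write_all(account_bytes)?;
        Ok(())
    }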

    TODOs:

    • [x] The account data is stored currently across 2 directories and need to be changed to use NVMe
    • [x] Look into performance bottleneck due to using the storage and look to see if the account could be partitioned across multiple directories to parallelize load and store operations.
    • [x] Error handling and remove files across runs

    Snapshot and version numbering is not planned for this release.

    Fixes # #2499

    opened by sambley 65
  • Feature Proposal: Revert VoteStateUpdate and use a different method for fixing issue 20014


    Problem

    VoteStateUpdate is a costly and complex solution to issue 20014 when simpler and more performant solutions are available.

    Proposed Solution

    • The issue is that when a validator switches forks, its local vote_state's lockouts are not correct with respect to the switched-to fork. The example given is votes on 1 - 2 - 3 (dead fork, switched away from) - 6 (new fork, switched to). The rest of the cluster pops 1 - 2 since their lockouts were never doubled, but the local validator doesn't pop 1 - 2 because it thought it voted on 3 which doubled 1 - 2's lockouts.

    • The VoteStateUpdate solution is to get the rest of the cluster to sync to the local validator's lockouts by basically demonstrating that "see, I voted on 3, so my lockouts doubled, even though they didn't really because my vote on 3 is not part of the real block history, it's on a dead fork". The implementation of this is very complicated and performance-costly. AshwinSekar is trying to reduce the costs but it's likely still way too costly. I think that testnet may be experiencing performance problems just due to the increased load of handling VoteStateUpdate instead of Vote.

    • Instead I think the solution should be to have the local validator fix its lockouts when switching forks. It is possible for the validator to update its own local lockouts so that they reflect what the rest of the cluster has computed, instead of forcing the rest of the cluster to use the local validator's lockouts.

    • This is accomplished most readily by having the tower save a copy of its lockouts right before each slot it votes on. Then when it switches forks, it finds the most recent slot voted on that is a common ancestor between the forks, and resets its vote_state's lockouts to the lockouts that existed immediately after the vote on that slot. Now it is in sync with the rest of the cluster, and applying the new slot to the tower for the new vote on the new fork does exactly what the rest of the cluster thinks it does.

    A validator still must observe proper lockouts when switching from the switched-from fork to the switched-to fork. It does this by retaining its lockouts while "still on that fork, waiting for lockouts to expire", and only updating the local vote_state with the saved lockouts as it casts its first vote on a slot on the new fork. The validator has already ensured that it is safe to vote on the new fork before my proposed change has any effect on its vote_state's lockouts, so it can't be any less safe, in terms of avoiding lockout violations, than the existing implementation.
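
    A hypothetical sketch of the bookkeeping this proposal implies (Slot and Lockout stand in for the real solana_vote_program types, and record_vote elides the actual lockout-stack logic):

    use std::collections::BTreeMap;

    type Slot = u64;

    #[derive(Clone)]
    struct Lockout {
        slot: Slot,
        confirmation_count: u32,
    }

    #[derive(Default)]
    struct TowerWithHistory {
        lockouts: Vec<Lockout>,
        // Lockouts as they stood immediately after the vote on each slot.
        saved: BTreeMap<Slot, Vec<Lockout>>,
    }

    impl TowerWithHistory {
        fn record_vote(&mut self, slot: Slot) {
            // ... pop expired lockouts, push the new vote, double the rest ...
            self.saved.insert(slot, self.lockouts.clone());
        }

        // On a fork switch, reset lockouts to the state right after the most
        // recent vote on a common ancestor of the two forks, restoring what
        // the rest of the cluster has computed.
        fn reset_to_common_ancestor(&mut self, common_ancestor: Slot) {
            if let Some((_, saved)) = self.saved.range(..=common_ancestor).next_back() {
                self.lockouts = saved.clone();
            }
        }
    }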

    opened by bji 61
  • Create bank snapshots


    Problem

Full node startup is slow when many transactions have been processed, as this involves querying for the full ledger and replaying it.

    Summary of Changes

    Serialize bank state into a snapshot and try to restore from that on boot.
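
    As a toy illustration of the idea (not the actual Bank serialization code), assuming serde and bincode:

    use serde::{Deserialize, Serialize};
    use std::path::Path;

    #[derive(Serialize, Deserialize)]
    struct BankState {
        slot: u64,
        // accounts, status cache, etc. elided
    }

    fn save_snapshot(state: &BankState, path: &Path) -> Result<(), Box<dyn std::error::Error>> {
        // Serialize the bank state and write it to disk.
        std::fs::write(path, bincode::serialize(state)?)?;
        Ok(())
    }

    fn load_snapshot(path: &Path) -> Result<BankState, Box<dyn std::error::Error>> {
        // On boot, restore the bank from the snapshot instead of replaying
        // the full ledger.
        Ok(bincode::deserialize(&std::fs::read(path)?)?)
    }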

    Fixes #2475

    opened by sambley 55
  • Add full spend recent blockhash sentinel


    Problem

    Custodial services need much longer to sign a transaction than our current replay protection allows.

    Summary of Changes

    • Remove duplicate code
    • Preparatory return-type adjustment
    • Introduce a full-spend recent-blockhash sentinel value

    Fixes #1982

    This is a first pass. I couldn't find where we ever came to a conclusion as to whether or not consensus should validate that the transaction indeed fully spends the account balance, so I didn't implement that here.

    opened by t-nelson 53
  • The solana-web3.js transaction confirmation logic is very broken


    The correct approach to confirming a normal transaction is to fetch the last valid block height for the transaction blockhash, then wait until the chain has advanced beyond that block height before declaring the transaction expired.

    A transaction that uses a nonce is a little different, since those transactions are technically valid until the nonce is advanced. For nonce-based transactions, the right approach is to check the corresponding nonce account to see if the nonce has been advanced while periodically retrying the transaction (although some form of timeout or mechanism for the caller to abort seems nice).
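
    The same algorithm, sketched in Rust against solana-client's RpcClient for illustration (the issue itself concerns solana-web3.js; last_valid_block_height is the value returned alongside the blockhash, e.g. by get_latest_blockhash_with_commitment):

    use solana_client::rpc_client::RpcClient;
    use solana_sdk::signature::Signature;

    fn confirm_until_expired(
        rpc: &RpcClient,
        signature: &Signature,
        last_valid_block_height: u64,
    ) -> Result<bool, Box<dyn std::error::Error>> {
        loop {
            // Done as soon as the signature is confirmed.
            if rpc.confirm_transaction(signature)? {
                return Ok(true);
            }
            // Past the blockhash's last valid block height, the transaction
            // can never land: it is safe to report expiry.
            if rpc.get_block_height()? > last_valid_block_height {
                return Ok(false);
            }
            std::thread::sleep(std::time::Duration::from_millis(500));
        }
    }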

    The big offenders:

    1. confirmTransaction

    This method simply waits for up to 60 seconds. This is wrong because a transaction's blockhash may be valid for longer than 60 seconds, so it's very possible for the transaction to execute after this method returns a failure. The signature of confirmTransaction does not include the blockhash of the transaction, so it's literally impossible to do the right thing without a signature change. This method should probably just be deprecated.

    2. sendAndConfirmTransaction and sendAndConfirmRawTransaction

    These two just need to be reimplemented to not use confirmTransaction and instead follow the approach outlined at the start of this issue.

    opened by mvines 48
  • AccountsDb: Don't use threads for update_index


    Problem

    I was profiling banking-bench runs with same-batch-only contention and noticed a lot of time was spent in update_index(). That makes sense, because len is 3 or something and splitting that up into multiple threads is primarily overhead.

    But when I played with having a min_chunk_size I realized that dropping the parallelism also sped up the none case (no contention, where len > 256). Hence this patch just removes it completely.
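
    A toy illustration of the trade-off, assuming rayon (the patch itself simply deletes the parallel path; MIN_PARALLEL_LEN is a made-up threshold):

    use rayon::prelude::*;

    fn update_index(items: &[u64]) -> u64 {
        const MIN_PARALLEL_LEN: usize = 256; // hypothetical threshold
        if items.len() < MIN_PARALLEL_LEN {
            // Tiny batches: thread fan-out is pure overhead.
            items.iter().map(|i| process(*i)).sum()
        } else {
            items.par_iter().map(|i| process(*i)).sum()
        }
    }

    fn process(i: u64) -> u64 {
        // Stand-in for the per-account index update.
        i.wrapping_mul(2)
    }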

    banking-bench results:

               none      same-batch-only       full
    before    96451          7967              3098
    after    133738         15197             10504
    

    (--packets-per-batch 128 --batches-per-iteration 6 --iterations=1200 for none, 200 otherwise)

    Summary of Changes

    Drop parallelism from update_index() in accountsdb.

    stale community 
    opened by ckamm 46
  • Incremental Snapshots


    Problem

    Startup time for nodes is slow when they have to download a large snapshot.

    Proposed Solution

    Incremental snapshots! Instead of always having to download large, full snapshots, download a full snapshot once/less often, then download a small incremental snapshot. The expectation/hope/thought is that only a small number of accounts are touched often, so use incremental snapshots to optimize for that behavior. At startup, now a node with an existing full snapshot only needs to download a new, small incremental snapshot.

    • Create full snapshots much less often (?)
      • every 100,000 slots? at epoch? SLOTS_PER_EPOCH / 2 - 1?
    • Create incremental snapshots every 100 slots (?)
    • Each incremental snapshot is the difference from the last full snapshot
    • Old incremental snapshots can be cleaned up, but save at least one extra as fallback
    • Add a new snapshot field to gossip to differentiate between full and incremental snapshots
      • The gossip info for incremental snapshots will need to include the slot of the full snapshot that this incremental snapshot is based on

    Example

    slot 100,000: full snapshot (A)
    slot 100,100: incremental snapshot (B)
    slot 100,200: incremental snapshot (C)
    ...
    slot 1xx,x0: incremental snapshot (D)
    ...
    slot 200,000: full snapshot (E)
    
    • Incremental snapshot (ISS for short) B is the diff between full snapshot (FSS) A and the state at slot B. Similarly, ISS C = diff(A, C), and so on.
    • The latest snapshot is still the valid snapshot. If the latest snapshot is an incremental snapshot, replay the FSS then the ISS.
    • Incremental snapshots older than a full snapshot can be deleted (i.e. FSS E supersedes FSS A, and ISS B, C, and D).
    • When ISS D is created, ISS B can be deleted.
    • At a slot between D and E, a new node would query for FSS A and then ISS D through gossip.

    Details

    Storing an Incremental Snapshot

    1. Get the slot from the last full snapshot
    2. Snapshot the bank (same as for FSS)
    3. Snapshot the status cache (slot deltas) (same as FSS)
    4. Package up the storages (AppendVecs) from after the FSS
    5. Make archive

    Loading from an Incremental Snapshot

    1. Get the highest full snapshot as done now
    2. Get the highest incremental snapshot based on the full snapshot from above (see the sketch after this list)
    3. Extract full snapshot
    4. Extract incremental snapshot
    5. Rebuild the AccountsDb from the storages in both FSS and ISS
    6. Rebuild the Bank from the ISS
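
    A hedged sketch of the selection in steps 1-2, with made-up types standing in for the real snapshot metadata:

    type Slot = u64;

    struct FullSnapshot { slot: Slot }
    struct IncrementalSnapshot { base_slot: Slot, slot: Slot }

    fn choose_snapshots<'a>(
        fulls: &'a [FullSnapshot],
        incrementals: &'a [IncrementalSnapshot],
    ) -> Option<(&'a FullSnapshot, Option<&'a IncrementalSnapshot>)> {
        // 1. Highest full snapshot.
        let full = fulls.iter().max_by_key(|f| f.slot)?;
        // 2. Highest incremental snapshot built on that full snapshot.
        let incremental = incrementals
            .iter()
            .filter(|i| i.base_slot == full.slot)
            .max_by_key(|i| i.slot);
        Some((full, incremental))
    }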

    Validator

    • new CLI args for setting ISS interval
    • loading ISS at startup
    • creating ISS periodically
    • discovering and downloading ISS in bootstrap

    Background Services

    • AccountsBackgroundService will need to know about the last FSS slot, so as not to clean past it
    • AccountsBackgroundService will now decide based on the full/incremental snapshot interval if the snapshot package will be a FSS or an ISS
    • AccountsHashVerifier no longer needs to decide full vs incremental
    • SnapshotPackagerService is largely unchanged

    AccountsDb

    • Update clean_accounts() to take a new parameter, last_full_snapshot_slot, so that zero-lamport accounts above the last FSS slot are not cleaned (see the sketch below)
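
    A hypothetical sketch of that guard (names assumed, not the actual AccountsDb code):

    type Slot = u64;

    // Zero-lamport accounts newer than the last full snapshot must be kept,
    // otherwise the next incremental snapshot could miss their removal.
    fn can_clean_zero_lamport_account(
        account_slot: Slot,
        last_full_snapshot_slot: Option<Slot>,
    ) -> bool {
        match last_full_snapshot_slot {
            Some(fss_slot) => account_slot <= fss_slot,
            // No full snapshot taken yet: keep the old behavior.
            None => true,
        }
    }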

    Ledger Tool

    • Update CLI args to set maximum number of incremental snapshots to retain

    RPC

    • Add support for downloading incremental snapshots

    Gossip

    • Add incremental snapshot hashes to CrdsData

    Bootstrap

    • Discover and download incremental snapshots during bootstrap

    Testing

    Unit Tests

    • [x] snapshot_utils: roundtrip bank to snapshot to bank for FSS
    • [x] snapshot_utils: roundtrip bank to snapshot to bank for ISS
    • [x] snapshot_utils: cleanup zero-lamport accounts in slots after FSS

    Integration Tests

    core/tests/snapshots.rs
    • [x] Make a similar test as the bank forks test, but also creating incremental snapshots
    • [x] Make a new test that spins up all the background services and ensures FSS and ISS are taken at the correct intervals, and they deserialize correctly
    local_cluster
    • [x] Make a similar test that generates ISS on one node, and the other node downloads then loads from it
    • [x] Make a test for startup processing new roots past full snapshot interval

    Questions

    • ~how often should incremental snapshots be created?~
      • ~more is good when (re)joining (faster startup time?), but less is good for the running node (less resource utilization?)~
      • ~does it matter when incremental snapshots would be made? Like after/before certain cleanup code?~
    • ~should incremental snapshots only exist locally, or should they also be sent to new nodes~
      • ~i'm guessing we want to send incremental snapshots to new nodes as well, so they start up faster~
    • ~what goes in the incremental snapshot?~
      • ~is it all the same data types as a full snapshot, just the delta since the last snapshot?~
    • ~should full snapshots be created less/same/more frequently now?~
      • ~likely not more... but there for completeness~
      • ~still need full snapshots for a new node joining the network~
    • ~what tests are needed?~
      • ~obviously a test to make sure it works~
      • ~ensure fallback to full snapshot works if an incremental snapshot is borked~

    Related Work

    Original Snapshot Work

    • Original issue: #2475
    • PR 1: #3671
    • PR 2: #4244

    Future Work

    • Dynamically decide when to generate full and incremental snapshots.
    • With the current implementation, it's highly beneficial if nodes use the same full snapshot interval. That way, if a node needs to download a snapshot at bootstrap and already has a full snapshot, it most likely won't need to download another full snapshot, just the incremental one. More discovery methods or decisions could be added to RPC/bootstrap to better support different full snapshot intervals.

    Tasks

    • [x] #18822
    • [x] #18829
    • [x] #18972
    • [x] #18973
    • [x] #18825
    • [x] #19014
    • [x] #18828
    • [x] #18813
    • [x] #18826
    • [x] #19297
    • [x] #19065
    • [x] #18639
    • [x] #19579
    • [x] #19296
    • [x] #19857
    • [x] #19856
    • [x] #20375
    • [x] #19885
    • [x] #20423
    • [x] #18641
    • [x] #18824
    • [x] #18815
    locked issue 
    opened by brooksprumo 40
  • Introduce eager rent collection


    Problem

    rent-due (!= rent-exempt) accounts are never garbage-collected unless updated (i.e. lazy collection), which accumulates over time and can be a DoS attack vector.

    Also, the current rent collection mechanism is somewhat gameable due to being lazy (noted below).

    Moreover, lazy rent collection has some UX issues.

    Summary of Changes

    • scan the whole pubkey domain space over an epoch (or 2 days, if the epoch is shorter than 2 days and not warmed up; ~not implemented yet~ implemented) as each slot progresses, while updating all accounts (see the sketch after this list).
      • (cons) AccountIndex was switched from HashMap to BTreeMap to support the range queries this requires
    • (pro) thus, the maximum number of AppendVecs is now bounded by the number of slots in a given epoch, combined with the recently-introduced per-slot shrinking #9219.
      • Also, this fixes the unbounded growth of AccountsIndex.roots.
    • as a side effect, the rent collection schedule is now universally consistent: existing accounts and newly-created accounts alike are collected exactly once per epoch, for the next epoch.
      • (pro) technically, we can remove account.rent_epoch as a bonus. (ACCOUNT_STORAGE_OVERHEAD: 128 -> 120, -6.25% for IO bandwidth)
    • the iteration algorithm is intentionally deterministic, as required because rent collection affects the account delta bank hash, distribution, and capitalization. It's also stateless, as desired to avoid ABI breakage.
    • (pro) the system load is well spread across an epoch because we can assume the vast pubkey space is uniformly distributed.
    • (pro) this doesn't introduce an ABI break; only an epoch-triggered enactment guard is needed.
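
    A hypothetical sketch of that deterministic scan, treating the first 8 bytes of each pubkey as a big-endian u64 and assigning each slot one contiguous subrange (the real partitioning logic differs in detail):

    type Slot = u64;

    // Boundaries of the pubkey-prefix subrange scanned at `slot_index`
    // within an epoch of `slots_in_epoch` slots.
    fn pubkey_range_for_slot(slot_index: Slot, slots_in_epoch: Slot) -> (u64, u64) {
        let width = u64::MAX / slots_in_epoch;
        let start = width * slot_index;
        let end = if slot_index + 1 >= slots_in_epoch {
            u64::MAX // the last slot absorbs the rounding remainder
        } else {
            start + width
        };
        (start, end)
    }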
    minor problem of the current rent collection

    Also, I found the lazy rent collection problematic because delayed rent collection doesn't account for whether past rent fees were different. That means the current calculation is based on the latest rent fee multiplied across epochs; it's not the integral of the past actual (dynamic) rent fees over those epochs. This introduces some gaming, much like the recently removed redeem-vote-credits (#7916): a validator or an account owner might send 0 lamports to rent-due accounts to collect/save rent when the at-the-time dynamic rent is more profitable. Although, dynamic rent isn't implemented yet...

    (cons) new DoS vector

    An attacker can send lamports to an arbitrary address if prepared to burn them, so we're vulnerable to a new targeted DoS vector (send many pubkeys under some subrange corresponding to the victim leader's slot), given the publicly available, deterministic next-epoch leader schedule. But I think that's tolerable given that an easier attack with a similar threat profile already exists: just flood the victim leader with burst TXes for some specific slot. Still, if forced to mitigate, I think cloning the whole Pubkey set is required and then .iter().chunk(.len() / SLOT_COUNT)-ing on it.

    Alternatives:

    • scan the whole account_index and collect rent when a new epoch begins
      • (pro) most straightforward to reason about and to implement.
      • (cons) too much sudden system load at the start of a new epoch
    • upfront clone() of account_index.keys() at each epoch boundary, processing a .chunk() of it for each slot like this PR.
      • (pro) no need to resort to BTreeMap.
      • (cons) moderate cluster hiccup and an ABI break on Bank/Snapshot; must also be included in the trust chain in some way.
    • lookup by slot/slot range:
      • (cons) account updates aren't uniformly distributed across the slots in general
    • slot coalescence (WIP: #9319):
      • (cons) as pointed out, a single AppendVec for several slots showed performance degradation in the past. Also, this solution still doesn't solve the stale rent-due accounts and gaming problems.
    • stale slot/AppendVec eviction/unmap by LRU and on-demand mmap after eviction:
      • (cons) still an unbounded number of small AppendVecs in the snapshot
    • cold-store stale accounts:
      • (cons) too much effort to implement; could be implemented later if account storage needs grow that much. Hampers the accounts index under a huge number of active pubkeys.
      • (cons) also, we'd need to revise the economic design/fee for executing TXes on accounts in cold storage; otherwise, an economically-efficient DoS attack is possible
    • maintain a HashSet<Pubkey> for rent-due accounts, separately:
      • (cons) can be another DoS vector when that account set gets large; we'd also still need a deterministic iteration algorithm.

    (Also, I've lightly researched the rent mechanism of other blockchain projects and found there is no better solution. ;)

    todo (once the direction is confirmed)

    • ~[ ] update docs for rents~ (let's do it in a future PR; this one is already big). #10348
    • [x] hard fork logic
    • [x] write/fix tests
    • [x] refactor the copy and paste
    • [x] check perf. degradation of using BTreeMap over HashMap (doing right now)
    • [x] warmup epochs & rent? what to do? (DONE: introduced constant cycle collection).

    Fixes #9383

    opened by ryoqun 40
  • require repair request signature, ping/pong for Testnet, Development clusters (backport #29351)


    This is an automatic backport of pull request #29351 done by Mergify. Cherry-pick of 832302485e61cc0f359b91f6c8f0c383caaf6629 has failed:

    On branch mergify/bp/v1.14/pr-29351
    Your branch is up to date with 'origin/v1.14'.
    
    You are currently cherry-picking commit 832302485.
      (fix conflicts and run "git cherry-pick --continue")
      (use "git cherry-pick --skip" to skip this patch)
      (use "git cherry-pick --abort" to cancel the cherry-pick operation)
    
    Changes to be committed:
    	modified:   core/src/ancestor_hashes_service.rs
    
    Unmerged paths:
      (use "git add <file>..." to mark resolution)
    	both modified:   core/src/serve_repair.rs
    
    

    To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/github/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally



    conflicts 
    opened by mergify[bot] 0
  • wip: set rent epoch to max on new account creation


    Problem

    New accounts must be rent exempt. Rent exempt accounts will get rent_epoch=u64::MAX. Rent collection is currently what sets this. At some point we want to turn rent collection off.

    Summary of Changes

    New account creation will set rent_epoch to u64::MAX.
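
    A minimal sketch of the idea (struct and constant names assumed here; the actual change touches the runtime's account-creation path):

    // Sentinel meaning "rent will never be collected" (name assumed).
    const RENT_EXEMPT_RENT_EPOCH: u64 = u64::MAX;

    struct AccountMeta {
        lamports: u64,
        rent_epoch: u64,
    }

    fn create_account(lamports: u64) -> AccountMeta {
        // Newly created accounts must be rent exempt, so stamp them with the
        // sentinel up front instead of waiting for rent collection to set it.
        AccountMeta { lamports, rent_epoch: RENT_EXEMPT_RENT_EPOCH }
    }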

    Fixes #

    opened by jeffwashington 0
  • Adds SnapshotError::IoWithSourceAndFile


    Problem

    We often have errors in the snapshot code that come from std IO functions. It is useful to know the path of the file that caused the error, but that's not available without creating custom errors. Since that happens in a number of places, it'd be convenient to have a SnapshotError variant that takes a path to automatically create the error string.

    Summary of Changes

    Add SnapshotError::IoWithSourceAndFile, and use it in a few places.

    Here's what the various errors look like. The lines are:

    1. ErrorKind
    2. SnapshotError::Io
    3. SnapshotError::IoWithSource
    4. SnapshotError::IoWithSourceAndFile

    Output for Display

                                  err: io error test string
                          err from io: I/O error: io error test string
              err from io with source: source(source string) - I/O error: io error test string
     err from io with source and file: source(source string) - I/O error: io error test string, file: file path
    

    Output for Debug

                                  err: Custom { kind: Other, error: "io error test string" }
                          err from io: Io(Custom { kind: Other, error: "io error test string" })
              err from io with source: IoWithSource(Custom { kind: Other, error: "io error test string" }, "source string")
     err from io with source and file: IoWithSourceAndFile(Custom { kind: Other, error: "io error test string" }, "source string", "file path")
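
    For illustration, a variant producing output like the above could be declared with thiserror roughly as follows (a hedged sketch; the actual definition in the snapshot code may differ):

    use std::path::PathBuf;
    use thiserror::Error;

    #[derive(Error, Debug)]
    pub enum SnapshotError {
        #[error("I/O error: {0}")]
        Io(#[from] std::io::Error),

        #[error("source({1}) - I/O error: {0}")]
        IoWithSource(std::io::Error, &'static str),

        // The extra PathBuf records which file the IO operation touched.
        #[error("source({}) - I/O error: {}, file: {}", .1, .0, .2.display())]
        IoWithSourceAndFile(std::io::Error, &'static str, PathBuf),
    }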
    
    opened by brooksprumo 0
  • Add new vote state version that replaces Lockout with LandedVote

    Add a new vote state version that replaces Lockout with LandedVote, to allow vote latency to be tracked in a future change. Includes a feature which, when enabled, causes the vote state to be written in the new form.

    Problem

    See timely vote credits proposal.

    Summary of Changes

    Added new VoteStateVersions Current. Moved existing Current to 1_14_11. Same for Tower. Added feature for enabling storage of new vote state version. Implemented method for updating storage to new vote state version only after the feature is enabled, resizing the vote account as necessary. Other mechanical changes to support these modifications.

    community 
    opened by bji 1