A new blockchain architecture under active development, with a strong focus on scalability, privacy and safety

Overview

Project Slingshot

Accelerating trajectory into interstellar space.

Slingshot is a new blockchain architecture under active development, with a strong focus on scalability, privacy and safety.

The Slingshot project consists of the following components:

Demo

Demo node where one can create transactions and inspect the blockchain.

ZkVM

ZkVM is a transaction format with cloaked assets and zero-knowledge smart contracts.

Blockchain

Abstract blockchain state machine for the ZkVM transactions.

Spacesuit

Interstellar’s implementation of Cloak, a confidential assets protocol based on the Bulletproofs zero-knowledge circuit proof system.

Starsig

A pure Rust implementation of the Schnorr signature scheme based on ristretto255.

Musig

A pure Rust implementation of the Simple Schnorr Multi-Signatures by Maxwell, Poelstra, Seurin and Wuille.

Keytree

A key blinding scheme for deriving hierarchies of public keys for Ristretto-based signatures.

Merkle

A Merkle tree API for computing Merkle roots and for creating and verifying Merkle proofs. Used for ZkVM transaction IDs, the Taproot implementation and Utreexo commitments.

Based on RFC 6962 Section 2.1 and implemented using Merlin.

Accounts

API for managing accounts and receivers. This is a building block for various payment protocols.

P2P

A small p2p networking library that implements peer management logic with pluggable application logic. Implements a symmetric DH handshake with forward secrecy.

Reader/Writer

Simple encoding/decoding and reading/writing traits and utilities for blockchain data structures.

Comments
  • keytree: store precomputed transcripts on Xprv, Xpub

    This is Step 4 in https://github.com/interstellar/slingshot/pull/202#discussion_r263216987, working to make sure that intermediate key derivation isn't fallible. This PR also updates the spec to match this change.

    opened by tessr 14
  • blockchain: scalability plan

    Problem

    The blockchain scalability problem has 4 facets:

    1. Bandwidth needed to receive all transactions happening on the network.
    2. CPU cost to verify all these transactions.
    3. State size that every node must maintain to prevent double-spending.
    4. Bootstrapping cost: how a new node is supposed to join the network.

    We are not including "SPV/light clients" as those are more like optimizations to the above with security tradeoffs. The exact nature of "light client" protocols depends on how those 4 problems are addressed.

    Current situation

    The current way to address these issues is:

    Bandwidth is minimized by using aggregated signatures and aggregated zero-knowledge proofs, and by keeping the amount of data and computation on-chain at a minimum. It can be further minimized by offloading the majority of payment transactions into the lightning network.

    • A 2-input/2-output transaction occupies about 1.5 KB.
    • A 32x32 transaction takes about 8.5 KB, which is equivalent to a marginal cost of ≈0.5 KB per individual 2x2 transfer.
    • In larger transactions, commitments and input/output data dominate the bandwidth, keeping the marginal cost "per payment" slightly below 0.5 KB.

    Averaging the best/worst cases to 1 KB per payment, at 500 payments per second, the average required bandwidth is 500 KB/s at maximum load. In a more optimized network, where only 50 payments/sec happen on-chain at 0.5 KB per payment, the bandwidth could be brought down to 25 KB/s.

    CPU cost is likewise kept at a minimum: all expensive operations are minimized, unified and processed in batches with a state-of-the-art implementation. A 2x2 transaction can be verified in about 2 ms, allowing ≈500 payments/sec throughput per 3 GHz CPU core (≈1000 inputs/sec). There's room for about 30-50% of further optimization.

    State is mostly a UTXO set that grows slowly with the user base and has a favorable access pattern (~Zipf's law): older UTXOs are spent more rarely than newer UTXOs. The UTXO set consists of 32-byte hashes of "output structures", so for N users each having M payment channels or idle outputs, the state is N*M*32 bytes. Thus 10M users with 4 outputs each produce a ≈1.28 GB state (and only the smaller fraction of fresh utxos needs to reside in RAM).
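
    As a back-of-the-envelope check of that estimate, in Rust:

    fn main() {
        let users: u64 = 10_000_000;
        let outputs_per_user: u64 = 4; // M: payment channels or idle outputs per user
        let utxo_hash_size: u64 = 32; // each utxo is a 32-byte hash of an output structure
        let state_bytes = users * outputs_per_user * utxo_hash_size;
        println!("state size: {:.2} GB", state_bytes as f64 / 1e9); // prints "state size: 1.28 GB"
    }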

    ZkVM also uses nonces to ensure tx uniqueness. These must all be stored so they can be checked against reuse, but they can have an expiration date.

    Another piece of state is a trail of "recent block headers" that can be referenced by nonces. These should cover at least 24h, so that multi-party transactions can be co-signed with a fresh nonce and published w/o having to retry because the block reference slid out of the window. They are sufficiently small not to be a bottleneck (a few megabytes at most).

    Bootstrapping requires either of two things:

    1. Replaying the entire blockchain, the cost of which is bandwidth multiplied by time (the optimized network with 25 KB/s bandwidth would then require downloading 769 GB per year of history),
    2. or bootstrapping the state from a committed snapshot verified by the entire network.

    The option (2) reduces the bootstrap cost to the cost of cloning an existing node's state, which is a significant upside. The downsides are:

    a. a different security model: one needs to trust that the entire network has followed the rules up to a recent point, and if everyone prunes the history, there may be no data left available for audit;
    b. extra load on the network nodes to verify and update state commitments, in addition to verification of transactions.

    Considering the cost of the alternative and the existence of fairly efficient schemes for state commitments (especially compared to the cost of verifying zero-knowledge proofs), the need for a state commitment scheme is evident. Currently we use a Patricia Merkle tree (aka "radix tree") for secure commitment and incremental updates to the set of utxos and nonces.

    Optimizing state size

    The utxo set growth is the next biggest concern.

    1. UTXO set size is proportional to the number of users, so the strategy of pushing most activity into payment channels does not affect its growth. And the growth of the federated network can be inhibited by the requirement to keep a large state, limiting the number and diversity of participants (1B utxos == 32 GB).
    2. UTXO set size, like bandwidth, is a costly externality: it is a one-time cost for the user to create a utxo, and a perpetual cost for everyone else to maintain it. This means it's likely that more utxos than necessary will be created.

    Solving this problem via economics, with "rent fees" or the like, is a measure of last resort, since that solution would create its own technical issues. It is much better to solve it at a technical level, so we do not need to introduce additional assumptions about the behavior of participants. For instance, the nodes could keep an O(1) or O(log(n)) commitment to the UTXO set and let the users provide proofs of membership for their own UTXOs, together with an efficient algorithm to update that commitment using only the partial data from the users' proofs.
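
    To make that concrete, here is a minimal sketch of the membership-proof side, assuming SHA-256 and a plain binary merkle tree; the real scheme (e.g. the utreexo-based proposal discussed below) differs in the details, and the commitment-update algorithm is omitted:

    use sha2::{Digest, Sha256};

    fn hash_pair(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
        let mut h = Sha256::new();
        h.update(left);
        h.update(right);
        h.finalize().into()
    }

    /// Checks that `leaf` is committed under `root`, given the sibling hashes
    /// along its path; bit i of `index` says whether the node is a right child
    /// at level i. The node only stores O(log n) roots; the user supplies `path`.
    fn verify_membership(root: &[u8; 32], leaf: [u8; 32], index: u64, path: &[[u8; 32]]) -> bool {
        let mut node = leaf;
        for (i, sibling) in path.iter().enumerate() {
            node = if (index >> i) & 1 == 0 {
                hash_pair(&node, sibling)
            } else {
                hash_pair(sibling, &node)
            };
        }
        &node == root
    }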

    Similar to the UTXO set, nonces pose two issues of their own:

    1. if the set of nonces is capped by time, then we allow unbounded growth that must be countered by some additional rules.
    2. if the set of nonces is capped by size, then we risk DoS attacks where malicious users exhaust the quota for the honest users.

    Again, a proper technical solution would permit nodes to keep an O(1) or O(log(n)) commitment to the set of nonces, with users proving non-membership of their nonces. This could let us eliminate the need for expiration times entirely, streamlining the ergonomics of higher-order protocols.

    Optimizing bandwidth

    Finally, once the scalability of the blockchain state is solved, the last problem is data/CPU bandwidth.

    Data bandwidth can be significantly optimized with a well-oiled payment channel network (lightning): most financial operations are payments, and even some sophisticated financial contracts are still structured as payments (e.g. futures). This means that, in principle, the entire blockchain network activity can consist mostly of the closing and opening of channels, plus occasional high-value transfers.

    CPU bandwidth follows the data bandwidth, but there is also an additional performance win from batching multiple proofs. E.g. verifying an entire block of 50-100 transactions is 20-24x faster than verifying them one by one, and this ratio is the same whether those transactions are small or big multi-party joint transactions. This means validators can minimize consensus latency by verifying proposed blocks in batches, and lower-power devices can keep up with the chain at a lower CPU cost. Only nodes that relay new transactions cannot enjoy batching to the fullest, though partial batching can still be applied there.

    Can we run on a phone?

    With a 10 tx/sec network load and 1 KB transactions, a user has to download >800 MB per day, spending 14.4 min on a 1 MB/sec link. The CPU cost with batching is pretty low: about a minute and a half per day.

    On an hourly basis that's 34 MB: 36 seconds of download and 3.6 sec of verification.

    It seems that ≈1 GB per day of traffic is intolerable for a consumer device, but pretty feasible for any online shopping cart, a merchant's point-of-sale terminal, or accounting software running in the closet of a 5-person firm.

    # Back-of-the-envelope full-node cost at 10 tx/sec (Ruby).
    tx_size = 1000 # bytes
    tx_cpu_cost = 0.002 # sec to verify one 2x2 tx, unbatched
    tx_per_sec = 10
    blockchain_bandwidth = tx_per_sec*tx_size
    
    # user
    sync_interval = 24*3600
    blob_size = sync_interval*blockchain_bandwidth
    batched_cpu_cost = ((1/20.0)*tx_per_sec*tx_cpu_cost)*sync_interval # assumes 20x batching speedup
    
    user_speed = 1_000_000 # bytes/sec
    download_time = blob_size / user_speed.to_f
    verification_time = batched_cpu_cost
    
    # user downloads 823.9 MB per day for 14.4 min, verifies for 1.4 min
    puts %{user downloads #{blob_size/(1024*1024.0)} MB per day for #{download_time/60.0} min, verifies for #{verification_time/60.0} min}
    

    We can optimize bandwidth by remembering recent transaction outputs and referring to them by a short index instead of a fully serialized output structure (128 bytes for predicate+anchor+value). Then a single lane contains not (4+3)*32 bytes of commitments, but only 3*32. On small transactions the r1cs proof dominates, but on larger, aggregated transactions we can achieve up to 40% bandwidth savings for realistic aggregation sizes (16x16—32x32).

     2x2 tx:    full size: 1483,  opt size: 1231 (marginal cost per 2 lanes: 1483 or 1231: 16% better)
     4x4 tx:    full size: 2059,  opt size: 1555 (marginal cost per 2 lanes: 1029 or 777:  24% better)
     6x6 tx:    full size: 2609,  opt size: 1853 (marginal cost per 2 lanes:  869 or 617:  28% better)
     8x8 tx:    full size: 3147,  opt size: 2139 (marginal cost per 2 lanes:  786 or 534:  32% better)
     12x12 tx:  full size: 4209,  opt size: 2697 (marginal cost per 2 lanes:  701 or 449:  35% better)
     16x16 tx:  full size: 5259,  opt size: 3243 (marginal cost per 2 lanes:  657 or 405:  38% better)
     24x24 tx:  full size: 7345,  opt size: 4321 (marginal cost per 2 lanes:  612 or 360:  41% better)
     32x32 tx:  full size: 9419,  opt size: 5387 (marginal cost per 2 lanes:  588 or 336:  42% better)
     64x64 tx:  full size: 17675, opt size: 9611 (marginal cost per 2 lanes:  552 or 300:  45% better)
    

    Note: even with radical savings from a hypothetical vector commitment for values (32 bytes instead of 64; not sure that's even possible w/o wasting bandwidth on more proofs), packing the predicate into the value commitment (MW-style), and minimizing VM format overhead to zero, we can only push the 40% gains to 70% gains, which is roughly "3x with crazy hard-core optimizations vs 2x with a simple optimization".

    So, taking the simple optimization with 32x32 aggregations at 336 bytes per logical payment (2x2), and a cap of 10 payments per second, the daily bandwidth is 290 MB instead of 800 MB as in our earlier example. To make this runnable on phones, we would need to trim the throughput 10x further, to just 1 payment per second, so the monthly data fits under 1 GB. That's pretty unrealistic.
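
    The same arithmetic in Rust, using the 32x32 marginal cost from the table above:

    fn main() {
        let bytes_per_payment: u64 = 336; // marginal cost per 2x2 payment at 32x32 aggregation
        let payments_per_sec: u64 = 10;
        let daily_bytes = bytes_per_payment * payments_per_sec * 86_400;
        println!("daily bandwidth: {:.0} MB", daily_bytes as f64 / 1e6); // prints "daily bandwidth: 290 MB"
    }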

    SPV costs

    Given that light clients (individual mobile devices) are not going to be able to casually run full nodes in the background (although they absolutely can if the user dedicates them to it, e.g. in a point-of-sale application), we need a simplified security model for them, aka "simplified payment verification".

    (1) The client only tracks block headers and confirms a payment via a merkle proof of its inclusion. The bandwidth depends linearly on the frequency of blocks (every 1 sec in the most loaded case, every 3-5 sec more realistically) and not on their contents. For 1000 outputs per block, one block per second, and 1 KB of header data including consensus proofs, the daily bandwidth becomes 84 MB (≈2.5 GB/month). With a 5-second block interval (100 payments/sec throughput), it becomes 17 MB (≈0.5 GB/month).

    This can be optimized further by using skiplist-style references that skip over 1024 blocks at a time, bringing the bandwidth down by a large factor.

    (2) SPV clients are likely to be users of payment channels, which requires watching the chain for forced channel settlements. To avoid simply outsourcing the watching for such settlements to other parties, we need a more efficient way to query a subset of data per block that allows detecting the settlement.

    ZkVM makes the output created by a contract fully deterministic, so the user can request merkle paths for a prefix of that (unexpected) utxo ID. Quorum members can then produce those log-sized proofs for all utxos created with such a prefix, per block. As for the proof that "nothing with this prefix exists at this time" (the typical case), it is simply a joint merkle path for the two neighbour items around the prefix, of size log(outputs_per_block)*32 (≈300 bytes per block), which is 25 MB/day (1 block/sec) or 5 MB/day (one block per 5 sec).
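
    A quick sanity check of those numbers in Rust (assumes one block per second and a ceil(log2) merkle path):

    fn main() {
        let outputs_per_block: f64 = 1000.0;
        let proof_bytes = outputs_per_block.log2().ceil() * 32.0; // ≈320 bytes per block
        let blocks_per_day = 86_400.0; // one block per second
        // prints "320 bytes/block, 28 MB/day", close to the ≈300 bytes / 25 MB figures above
        println!("{:.0} bytes/block, {:.0} MB/day", proof_bytes, proof_bytes * blocks_per_day / 1e6);
    }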

    Payment channels normally have a large contest period, so the forced-settlement utxo is guaranteed to exist over some number of blocks N. This means the user may request proofs of inclusion only every N/10 blocks and still have guaranteed detection of the settlement, with at most 10% of the contest period wasted.

    opened by oleganza 8
  • zkvm: refactor encoding/decoding

    We currently have multiple uses for encoding throughout zkvm, blockchain and p2p crates:

    1. Writing various objects (TxHeader, TxLog, Tx, BlockTx) into merlin::Transcript.
    2. Encoding and decoding contracts within VM from byte strings.
    3. Reading and writing BlockTx to the network sockets.

    Downsides of the current approach:

    1. Writing to the Transcript is usually done with an intermediate encoding of the entire object into a buffer (see zkvm::Contract::id() and blockchain::BlockTx::witness_hash()).
    2. When we write to the wire, we want to use Tokio Codecs with their buffers (bytes::BytesMut/BufMut), but keep the rest of the system independent from Tokio or any I/O framework.

    Requirements

    We need a pair of traits Reader/Writer akin to Buf/BufMut, but with several modifications (a rough sketch follows this list):

    1. Never panics.
    2. Reads and writes return Result, with associated Error type, defined by the impl.
    3. Read fn checks the remaining length.
    4. The write fns accept a label of type &'static str that can be used with the Transcript (or JSON ;-)). Binary buffers will simply ignore the label.
    5. We only need u8/u16/u32/u64 (LE) and &[u8] types supported for now. (Keep _le explicit in the naming, though.)
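
    A rough sketch of what such a pair of traits could look like; names and signatures here are illustrative, not a committed API:

    pub trait Writer {
        type Error;
        /// Writes a little-endian u64 under a static label.
        /// Transcript impls would use the label; plain binary buffers ignore it.
        fn write_u64_le(&mut self, label: &'static str, value: u64) -> Result<(), Self::Error>;
        /// Writes a byte string under a static label.
        fn write_bytes(&mut self, label: &'static str, bytes: &[u8]) -> Result<(), Self::Error>;
    }

    pub trait Reader {
        type Error;
        /// Returns an error instead of panicking if fewer than 8 bytes remain.
        fn read_u64_le(&mut self) -> Result<u64, Self::Error>;
        /// Returns an error if fewer than `len` bytes remain.
        fn read_bytes(&mut self, len: usize) -> Result<Vec<u8>, Self::Error>;
    }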

    Organization

    1. Separate readerwriter crate.
    2. Optional dependency on bytes (for interop with Bytes/Buf), with corresponding module bytes_impls.rs enabled on #[cfg(feature="bytes")].
    3. Optional dependency on merlin (for interop with Transcript), with corresponding module merlin_impls.rs enabled on #[cfg(feature="merlin")].
    4. Crate zkvm depends on readerwriter, SliceReader is removed.
    5. In crates zkvm and blockchain: encoding/decoding methods are changed to use Reader/Writer API. Hashing is redefined in the spec to write data to transcript field-by-field.
    6. Crate blockchain depends on readerwriter, but not on p2p.
    7. p2p uses same readerwriter API for its encoding needs.
    opened by oleganza 5
  • zkvm: flexible anchoring of signatures

    Currently, the delegate instruction binds the signature to the entire contract ID, which is in turn bound to the unique anchor and the contents. This is the safest implementation, since it prevents replay of the signature on other instances of a similar contract (due to anchoring) and ties the signature to a concrete state of the contract (making it unusable for any alternative state of the contract).

    In some situations we intentionally want more flexibility. For instance, in payment channels a contesting signature should be applicable to an arbitrary version of the transaction that initiates the settlement. This means that the signature should still be bound to the anchor and the predicate, but not to the contents (or at least not to the balance distributions). This corresponds to the statement "sign this program that checks the sequence number, applicable to any state of the channel between these two parties". In other words, the signature really covers only the public key.

    Ideas:

    1. Allow masking out the anchor, or all or parts of the payload, via a signature flag or a variant of the delegate instruction. Should this always be an explicit choice (what to sign), or an opt-out (everything is signed by default unless selected not to)? If the anchor is not signed, the signature is replayable against other instances with the same predicate, which requires users to choose unique keys as an offline "nonce" choice.
    2. Always sign the anchor, but allow carrying the anchor over w/o ratcheting, and allow optional non-binding of the contents. This is safer than (1) because the sig is not replayable over other existing contract instances with the same key, but it potentially permits resurrecting utxo IDs, as anchors can be reused (although still not duplicated!).
    3. Do we always need to sign the predicate, or not? It seems there's no reason to allow not signing the predicate that's used to check the signature: it simply invites risky malleability for no concrete benefit.
    4. Some other way?

    Previous discussion around anchors: https://github.com/stellar/slingshot/pull/193

    opened by oleganza 5
  • blockchain: scalable state proposal

    Per discussion in #230, here's a concrete proposal for addressing scalability of the blockchain state:

    • Core node contains a "trimmed state" of log₂(utxos) size via the utreexo protocol. If it has its own utxos, it needs to track updates to the state and adjust the merkle proofs for its utxos. This means that all devices w/o bandwidth constraints have low storage overhead and can use core nodes for all their blockchain needs (wallets, issuers, exchanges, point-of-sale terminals, payment processors etc).
    • Thick node encapsulates a Core node, stores the full UTXO state and allows non-nodes to simply keep their utxos: it will provide up-to-date proofs or insert them on the fly.
    • Core nodes can maintain proofs for the N most recent utxos so that other nodes can avoid sending proofs for them. This dramatically improves bandwidth, as nodes only need to transmit proofs for older UTXOs that were pruned from this limited buffer. N can be chosen on a per-connection basis.
    • There are no nonces to store. For non-replayability, every transaction must spend a utxo.
    • The nonce instruction can be removed.
    • To enable issuance of custom assets, blocks mint synthetic nonce-units abbreviated as nits. These can be given away to anyone who needs to issue assets, so they can make their txs non-replayable.
    • Nits are minted on a predetermined and capped schedule, in order to avoid u64 overflow just like for any other asset. Every block is permitted to add one artificial nits-containing utxo to the state with an arbitrary predicate, a predetermined quantity, zero flavor and the anchor set to the previous block ID.
    • Minted utxos are remembered in a separate maturity buffer with a 48h expiration time, and spend attempts are checked against the maturity time. This is to incentivize placing transactions in a continuous chain of blocks.
    • SPV clients can subscribe to Core nodes to track utxo proofs for them and receive up-to-date proofs on a continuous basis, so they can (1) detect misbehavior early and switch to a different Core node; (2) watch payment channel status and detect forced settlements, to be able to contest invalid ones.
    opened by oleganza 5
  • zkvm: question: how to push predicate to Program?

    Hello. I am a beginner in Rust. I'm interested in your zkvm and am trying to make a simple example that proves two pushed data items are equal. I'd like to ask how to push a predicate to the program and set the Anchor.

    This is the code I wrote.

    use zkvm;
    use rand::Rng;
    use curve25519_dalek::ristretto::CompressedRistretto;
    use curve25519_dalek::scalar::Scalar;
    use bulletproofs::{BulletproofGens, PedersenGens};
    use merlin::Transcript;
    
    fn bytecode(prog: &zkvm::Program) -> Vec<u8> {
            let mut prog_vec = Vec::new();
            prog.encode(&mut prog_vec);
            prog_vec
    }
    
    fn main() {
        let mut rng = rand::thread_rng();
        let range = 0..31;
        let mut randoms:[u8;32] = [0;32];
        for i in range {
            randoms[i] = rng.gen();
        }
        let ristretto = CompressedRistretto(randoms);
        let verify_key = zkvm::VerificationKey(ristretto);
        let bp_gens = BulletproofGens::new(64, 1);
        let pc_gens = PedersenGens::default();
        let program1 = zkvm::Program::build(|prog:&mut zkvm::Program|{prog.push(123).r#const().push(123).r#const().eq().verify()});
        let pred = zkvm::Predicate::Program(program1.clone());
        let op = pred.prove_program_predicate(&bytecode(&program1));
        let program2 = zkvm::Program::build(|prog:&mut zkvm::Program|{prog.push(0).push(op).nonce()});
        let tx_header = zkvm::TxHeader{version:0,mintime:0,maxtime:9999999999};
        let privkey = Scalar::random(&mut rand::thread_rng());
        let sign_fn = |transcript:&mut Transcript, verification_keys:&Vec<zkvm::VerificationKey>|{
            let mut trans = transcript;
            zkvm::Signature::sign_single(&mut trans, privkey)
        };
        let (tx,_,_) = zkvm::Prover::build_tx(program2,tx_header,&bp_gens,sign_fn).unwrap();
        let verified = zkvm::Verifier::verify_tx(tx,&bp_gens).unwrap();
    }
    

    Running this code, I got the error message "the trait std::convert::From<zkvm::point_ops::PointOp> is not implemented for zkvm::scalar_witness::ScalarWitness". As far as I can see from the Instructions section of the ZkVM specification, however, no opcode can push predicates or the PointOp of predicates to the stack.

    I would appreciate it if you could share how to push predicate to the program and set Anchor.

    question 
    opened by SoraSuegami 5
  • zkvm: encrypted values and data instead of reblinding protocol

    Problem

    Blinding/Reblinding protocol in ZkVM solves the problem of (roughly speaking) "communicating the blinding factor to a counter-party". The problem is still relevant, but the solution is sub-optimal:

    1. Values become "stateful": their variables for qty/flavor have to keep commitments inside the VM to make them "replaceable" (re-blindable).
    2. This makes the runtime state more complicated and requires tracking whether a variable is "attached" or "detached".
    3. We cannot use the PortableItem and Value types in the Output: instead we have another set of types, FrozenItem and FrozenValue.
    4. When a commitment is blinded with the blind instruction, the user is supposed to copy-paste a part of the proof into the reblind instruction to "remind the VM" that a certain commitment has the required shape. This is unnecessary overhead that could be eliminated if the VM encoded the valid transformation in the type system.

    Suggestion

    Instead of focusing on commitments, introduce EncryptedValue and EncryptedData that represent valid ElGamal encryptions of a Value type and Data type respectively.

    Then, we can remove FrozenItem/FrozenValue types, use simple commitments within Value type, removing variable_commitment vector and concepts of "attached" and "detached" variable state from the VM.

    Maybe the decryption proof can be made simpler since we'll be using a more suitable structure (elgamal encryption) within the new "encrypted" types.

    TBD: should we allow individual encryption of qty/flavor, e.g. for when the flavor is statically predetermined? Or maybe we can share the encryption nonce between both of them?

    Alternative

    If there's efficiency gain from encrypting a lot of commitments at once to a single key, we can encrypt the entire contract!

    1. Make Variable(Commitment) a portable type.
    2. Encrypt a list of items, applying it to Value and Variable and skipping Data items.
    3. Keep additional elgamal encryption data inside a separate type EncryptedContract, or per-item within EncryptedValue or EncryptedVariable.
    opened by oleganza 5
  • [question]: how to use the `borrow` instruction

    Hi,

    I am trying to figure out how to use the borrow instruction.

    Based on the code in zkvm/tests/zkvm.rs, I added a simple (failing) test case as follows

    #[test]
    fn borrow_output() {
        //inputs 10 units, borrows 5 units, outputs two (5 units)
        let flv = Scalar::from(1u64);
        let (preds, scalars) = generate_predicates(3);
        let borrow_prog = Program::build(|p| {
            p.input_helper(10, flv, preds[1].clone()) // stack: Value(10,1)
                .push(Commitment::blinded(5u64))      // stack: Value(10,1), qty(5)
                .var()                                // stack: Value(10,1), qty-var(5)
                .push(Commitment::blinded(flv))       // stack: Value(10,1), qty-var(5),   flv(1)
                .var()                                // stack: Value(10,1), qty-var(5),   flv-var(1)
                .borrow()                             // stack: Value(10,1), Value(-5, 1), Value(5,1)
                .output_helper(preds[0].clone())      // stack: Value(10,1), Value(-5, 1); outputs (5,1)
                .cloak_helper(2, vec![(5u64, flv)])   // stack:  Value(5,1)
                .output_helper(preds[2].clone())      // outputs (5,1)
        });
        build_and_verify(borrow_prog, &vec![scalars[1].clone()]).unwrap();
    }
    

    cargo test outputs

    thread 'xxxx' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidR1CSProof'
    

    Where did I make a mistake?

    Thanks a lot!

    opened by dantengsky 4
  • keytree: serialization for xpub, xprv

    For both Xprv and Xpub types:

    • ::to_bytes(&self) -> [u8; 64]
    • ::from_bytes(&[u8]) -> Option<Self> (checking that the slice is exactly 64 bytes long)

    For later: implement Serde on top of these (example)

    Note: from_bytes won't benefit from taking a fixed-length array ([u8; 64]), because at least Xprv would still have to check whether the scalar is canonically encoded. So we might as well make decoding of all of these fallible and take byte slices, letting the user avoid extra 64-byte copies.
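
    A sketch of what this could look like, assuming (purely for illustration) that Xprv holds a scalar plus 32 bytes of derivation key material:

    use curve25519_dalek::scalar::Scalar;

    // Illustrative stand-in for keytree's Xprv.
    pub struct Xprv {
        scalar: Scalar,
        dk: [u8; 32],
    }

    impl Xprv {
        pub fn to_bytes(&self) -> [u8; 64] {
            let mut buf = [0u8; 64];
            buf[..32].copy_from_slice(self.scalar.as_bytes());
            buf[32..].copy_from_slice(&self.dk);
            buf
        }

        /// Fails on wrong length or a non-canonical scalar encoding.
        pub fn from_bytes(bytes: &[u8]) -> Option<Self> {
            if bytes.len() != 64 {
                return None;
            }
            let mut s = [0u8; 32];
            s.copy_from_slice(&bytes[..32]);
            let scalar = Scalar::from_canonical_bytes(s)?; // Option in curve25519-dalek 3.x
            let mut dk = [0u8; 32];
            dk.copy_from_slice(&bytes[32..]);
            Some(Xprv { scalar, dk })
        }
    }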

    opened by oleganza 4
  • ZkVM: Spec Questions

    https://github.com/interstellar/slingshot/blob/main/zkvm/spec/ZkVM.md#aggregated-signature

    "Commit all verification keys P[i] in order, one by one"

    Would this be in lexicographic order?

    Can the terms clear-text scalars and non-secret scalars be used interchangeably?

    The term "Flavour" was defined in the Cloak protocol, should we copy over the definition for the specs?

    opened by kevaundray 4
  • zkvm: return () instead of `&mut Program` from the closure in `Program::build`

    Here's a real use with an awkward ending line:

    let program = zkvm::Program::build(|p| {
        for stored_utxo in spent_utxos.iter() {
            p.push(stored_utxo.contract());
            p.input();
            p.sign_tx();
        }
    
        let pmnt = payment_receiver.blinded_value();
        p.push(pmnt.qty);
        p.push(pmnt.flv);
    
        p.cloak(
            spent_utxos.len(),
            maybe_change_receiver_witness.as_ref().map(|_| 2).unwrap_or(1),
        );
    
        p.push(payment_receiver.predicate());
        p.output(1);
    
        // TBD: change the API to not require return of the `&mut program` from the closure.
        p
    });
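
    A sketch of the requested change, using a stand-in Program type; the closure returns () and build returns the finished program:

    pub struct Program(Vec<u8>); // stand-in for zkvm::Program

    impl Program {
        /// Accepts FnOnce(&mut Program) -> (), so the closure body no longer
        /// needs to end with `p`.
        pub fn build<F: FnOnce(&mut Program)>(builder: F) -> Program {
            let mut program = Program(Vec::new());
            builder(&mut program);
            program
        }
    }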
    
    opened by oleganza 3
  • Feature Request: We need example for how to write contract with zkvm

    What problem does your feature solve?

    Let other developers learn how to program on zkvm

    What would you like to see?

    A comprehensive contract demo of zkvm

    What alternatives are there?

    opened by howjmay 0
  • Example website seems to be broken

    What version are you using?

    N/A

    What did you do?

    Browsed to https://zkvm-demo.stellar.org/

    What did you expect to see?

    Not an error.

    What did you see instead?

    Rocket error page 503.

    bug 
    opened by felixwatts 1
  • zkvm: constant-size timeouts for multi-hop payment channels

    Credits

    This is a ZkVM-specific implementation of the idea originally described in the Sprites paper by Andrew Miller, Iddo Bentov, Ranjit Kumaresan, Christopher Cordi and Patrick McCorry (2017): https://arxiv.org/pdf/1702.05812.pdf

    Intro

    Lightning Network uses HTLCs ("hash+timelock contracts") to ensure atomic updates of balances across multiple ledgers, where the ledgers are private to each peer. The purpose of an HTLC is to guarantee that "I send 1 coin only if I receive 1 coin". First, all nodes along a multi-hop route enter these HTLC contracts one by one, starting with the sender. Then, starting with the recipient, they share the preimage. Once a node knows a preimage, it has assurance that both the incoming and outgoing HTLCs can be resolved, so it can independently upgrade both contracts to an unconditional state (with the HTLC condition effectively stripped off).

    Problem

    For each node along a multi-hop route, there must be a safe difference between the timeouts of the outgoing and incoming HTLCs: if the outgoing one can be resolved with a preimage before some time T, there must be a T+delta timeout for resolving the incoming HTLC. Delta is typically many hours (e.g. 12 hours) to make sure the closing transaction can be published before the money is reverted. Likewise for the failure case: the incoming HTLC must not fail until the outgoing HTLC fails.

            A  ---------> B ---------> C ---------> D
    
    HTLC:        36 hr        24 hr        12 hr
    

    This means that an N-hop route has an N*delta maximum timeout on the sender's side, so the sender faces a multi-day lock-up of funds in case the payment fails. This makes large distances too risky in terms of the time value of the capital wasted, which in turn increases the amount of capital each node must lock up to ensure good connectivity with short routes.

    For context, a realistic network with minimal capital overhead requires log(N) hops on average, so ≈20 hops for 1 million nodes. It's a lose-lose scenario.

    Abstract definition of O(1) HTLC

    How do we replace the O(n) timeout for n hops with O(1)? We need to guarantee that the use of the unlock beacon ("preimage is revealed" in classic HTLCs) is split into two parts instead of being packaged as one:

    1. reveal of the beacon before T1
    2. use of the beacon before T2

    This ensures that if the beacon is not revealed by T1, it cannot possibly be used by anyone at any point in the route. But if it is revealed by anyone, there's T2-T1 extra time for everyone to use it for resolution of their own contracts, simultaneously.

    There are two more requirements:

    1. Anyone should be able to create a beacon once the preimage is known.
    2. Anyone should be able to use anyone's beacon once it's revealed.

    The first requirement provides security after the preimage is cooperatively disclosed by the recipient, until nodes re-sign their payment channels with the HTLC condition removed. If you know you can create the beacon at any time, you are safe to sign off an unconditional outgoing payment before getting the incoming one signed off.

    The second requirement is key to O(1) timeout: if anyone succeeded at revealing the beacon B' right before T1, it may be too late for anyone else to create their own B'', so they should be able to use B' as-is and complete the incoming HTLC before T2 (when the rollback would be allowed).

    ZkVM implementation

    To implement such an HTLC, we need to wrap it into an issuable asset (insert NFT joke here). The flavor ID is defined by the issuance program, which checks tx.maxtime against the global timeout (T1) and a hash preimage. The payment channels replace the HTLC condition with a "proof of utxo existence" for the asset with such an ID.

    • If nodes propagate preimages normally, then all nodes know that they can issue such a token in the same transaction used to close the channel (the merkle path consists of simply the contract ID at that point). No one needs to actually issue a token if everyone cooperates to update the balances with the HTLC conditions removed.
    • If some node successfully issues such a token right before T1 expires, other nodes can observe that and use the Utreexo merkle path to it as a way to resolve their HTLCs before timeout T2.
    • If by time T1 no one has issued such a token, then it's not issuable anymore and everyone safely cancels their HTLCs after T2.

    The solution yields a constant 2-interval timeout that scales to any number of hops (e.g. 24 hours, if we assume a 12-hour interval is necessary for reacting to on-chain events). This is overhead equivalent to traditional 2-hop routes, and strictly better for longer routes.

    A constant-sized timeout enables the network to safely use minimal capital lockup with a binary tree topology, where each node has at most 3 channels: 2 "down" and 1 "up", with log2(n) hops required to reach any node.

    opened by oleganza 0
  • zkvm: payment channels and HTLC

    1. Payment channel

    Simple bilateral payment channel.

    Alice and Bob bring amounts of some asset into a channel and periodically update the distribution of those assets. A distribution is a signed predicate that re-locks the funds under a relative timeout. Within the timeout, a newer signed distribution can be applied to replace the stale one.

    This could be a one-way channel (Alice->Bob, only Alice brings funds), or a multi-asset channel (Alice brings USD, Bob brings EUR), or a set of multiple currencies that FX traders move around between each other.

    The system trivially extends to an N-party channel with N-of-N signatures updating the balance distribution.

    Setup

    Alice and Bob exchange pubkeys A and B used in the operation of the channel and establish a joint MuSig key AB = A & B.

    Alice and Bob sign a balance update with seq=1 reflecting initial balances.

    Now it's safe to lock funds. They compute the initial predicate P_init that transiently re-formats the contract into a format compatible with the P_exit predicate used to manage exits:

    P_init = AB + program {
         // transient contract:
         contract(payload {
            assets, 
            seq=0, 
            timeout=MAX_INT, 
            "" (empty prog),
            tag
      }) -> P_exit
    }
    
    P_exit = AB + program {
       verify(tx.mintime > self.timeout)
       eval(self.redistribution_program)
    }
    

    Balance update

    Alice and Bob agree on a new distribution of funds and sign a new predicate that can be used to override any prior state of the contract. The program is signed with the P_exit multikey.

    program($seq, $tx, $new_distribution) = {
        verify($seq > self.seq)
        self.redistribution_program = $new_distribution
        self.timeout = $tx.maxtime+T
        lock(self, P_exit)
    }
    

    Example of a re-distribution program:

    $new_distribution = program {
        output($90 -> Alice)
        output($10 -> Bob)
    }
    

    Settlement

    Both parties sign a tx that re-distributes the funds, bypassing the contract logic. The resulting tx looks like a simple transfer, w/o leaking any details.

    Initiate forced settlement

    Alice wants to close the channel because Bob does not cooperate.

    Alice forms a transaction Tx1 that opens P_init's programmatic branch, re-locking the assets under a transient contract with the necessary parameters under P_exit.

    Within the same Tx1, Alice spends P_exit with a signed program performing the latest balance update. Tx1 leaves the assets locked in the last-agreed proportion under the same predicate P_exit, but withdrawable after the timeout.

    After the timeout, Alice signs Tx2, which pays an up-to-date fee and opens P_exit's programmatic branch, releasing the assets to the pre-arranged destinations for Alice and Bob.

    Contest forced settlement

    If Bob observes that Alice initiated a channel close with a stale signed predicate (a seq number less than the latest they agreed upon), he applies a newer signed distribution within the timeout, replacing the stale state.

    Privacy considerations

    Financial values (asset types and quantities) remain encrypted at all times, in cooperative and non-cooperative cases.

    In case of cooperation, there are two transactions: funding and settlement. Both look like regular single-key spends.

    In the case of non-cooperation, the channel contract is published. But the sequence numbers and timeout durations are kept encrypted, so neither the channel software nor the participants can be fingerprinted, and their transaction volume cannot be estimated.

    The simplicity of the scheme allows outsourcing the monitoring of the blockchain to semi-trusted servers that bump the state if they notice a stale exit. The confidentiality of the transfers protects against extortion and permits using a flat fee to support such watch servers.

    2. Multi-hop channels (lightning)

    While a simple channel has unconditional balance updates (the extra checks exist only for replacing stale versions), a multi-hop channel needs to update the balances conditionally: "Bob sends $5 to Carol only if he gets $5 from Alice".

    The conditional updates use HTLCs (hash-timelocked contracts) to provide reversal after an absolute timeout (a relative one won't do). To keep channels open indefinitely, HTLC-locked balances must be replaced with non-HTLC updates, as in simple payment channels, once the payment is guaranteed (the preimage for the HTLC is provided).

    This means that HTLC condition is wrapped into a payment channel condition:

    1. first, we establish which version of agreement is the latest (payment channel protocol),
    2. then we deal with its extra conditions (if HTLC is on), or just stay put (if it's unconditional update after HTLC is opened).

    This works the same as above, but an intermediate HTLC-locked condition looks like this:

    $new_distribution = program {
        self.timeout = tx.maxtime
        lock with taproot {
           branch1(): {
               verify(tx.mintime >= (self.timeout + delta));
               output($90 -> Alice)
               output($10 -> Bob)
           },
           branch2($preimage): {
               verify(sha256($preimage) == self.htlc);
               output($85 -> Alice)
               output($15 -> Bob) // Alice is sending $5 through Bob
           },      
        }
    }
    
    opened by oleganza 0
  • ZkVM rollup idea

    ZkVM Rollup

    TL;DR

    Instead of implementing a custom centralized "trade chain" both in normal software and in an expensive L1 smart contract, we reuse the same ZkVM format, which permits stateless verification, and invoke "recursive" verification from within the outer ZkVM chain, reusing the conventional Rust implementation with none of the virtualization overhead. The cost is that of verifying one more regular transaction, plus checking a few hashes along some merkle trees such as utreexo snapshots.

    Motivation

    Blockchains are great at trust-minimization at a cost in performance, while centralized trading systems are more scalable and performant. For instance, offers in the order book need to be updated rapidly and always be ready to be consumed by any of the many players involved.

    Several Ethereum projects have explored schemes known as rollups, where a central party organizes the marketplace and commits to its rules. Users' funds are locked up in an on-chain contract that knows the rules and can check violations of them. In case of a violation, any user can submit a proof of it and begin the process of returning their deposit.

    The transcript of the game, or trade chain, is very similar to a blockchain: individually signed transactions that modify shared state. Except that the consensus is operated by a centralized signer, and their actions can be disputed on a higher-order blockchain (the main chain).

    Kinds of violations

    There are three kinds of violations to deal with in a rollup system:

    1. censorship,
    2. invalid transition,
    3. fork of the state.

    Censorship is mitigated by the ability to withdraw funds directly from the smart contract at the latest state of the trade chain. In such a case, the withdrawal is timelocked to ensure that the funds were not already consumed from that account.

    An invalid transition attack is mitigated by being able to "bisect" to the point in the trade chain history where the violation took place. When such a location is proven, all further transactions are ignored (rolled back). This should not hurt traders who validate the trade chain directly and halt trading when a violation is detected.

    A fork attack occurs when the operator signs two incompatible but independently valid histories, and it's impossible to objectively prove which one is correct. One mitigation is to commit the state of the history early and often, as frequently as every block of the host chain. This reduces the window of profit opportunity in which an operator could make a risky bet, lose, and attempt to roll back the chain by forking it to the point before the bet was placed.

    To make such attacks expensive, the operator places a bond on the host chain that gets destroyed if the operator forks the chain. The size of the bond is bounded from below by the trade volume between the commits, and from above by the cost of capital vis-a-vis the transaction fees collected by the operator. Also, the attack scope is limited to a one-time event, after which everyone stops trusting the operator and looks for recourse in the legal framework outside the blockchain.

    Kinds of rollups

    Optimistic Rollup: snapshots of the trade chain state are optimistically committed to the main chain and can be challenged within a certain timeframe. All funds are controlled by a smart contract on the main chain that is capable of checking the above violations. In the case of Ethereum, EVM bytecode needs to implement all the rules of the trade chain so that it can execute the minimal state transition and check whether it's valid (there is no need to verify the entire chain).

    ZK Rollup: snapshots of the trade chain state are committed to the main chain with a zkSNARK proof of correct or incorrect evaluation. In Ethereum, an EVM smart contract must implement the SNARK verification logic. The important difference is that the SNARK can pack up verification of all transactions between the checkpoints and apply updates unconditionally, without timeouts to watch for violations. However, a contest mechanism is still required to be able to prove a fork.

    ZkVM rollup

    In both cases above, the rules of the trade chain must be encoded using the smart contract language: either directly (optimistic rollup), or indirectly through a SNARK circuit (zkrollup).

    ZkVM offers a third option: ZkVM may implement one or two instructions for recursive validation of embedded ZkVM transactions. Instead of re-implementing ZkVM in its own language, we could simply instantiate another VM instance and run a trade chain transaction through it. This is possible because ZkVM has a well-defined compact state and stateless tx validation that is sufficiently lightweight for a wide range of "trade chain" applications.

    The cost of contesting a violation of a ZkVM trade chain is simply the cost of one more ZkVM transaction: it is easy to account for, it does not require new tooling for the language, and it allows keeping the smart contract language non-generic and high-level, unlike EVM.

    EVM permits the implementation of arbitrary trade chain state machines, while ZkVM would only be able to validate the ZkVM rules. How do we implement custom trade chain logic then? The answer is: we can do it using both the main chain and trade chain contracts. All the individual or bilateral rules can be specified in the trade chain contracts, while the global rules (such as "offers must be fulfilled in the correct order") can be checked for violations in the outer, main chain contract.

    Trading engine example

    Terminology

    The parties to the protocol are one operator and a variable number of traders. The operator could be secured with a multi-signature / multi-party setup, but from the perspective of the traders it is still a single logical entity.

    The protocol is executed over a main chain and a trade chain. The trade chain is timestamped by the operator on the main chain, where all the deposited funds and the trade chain state are secured in a contract. The contract implements the key part of the protocol: enabling the transition from one state to another, and the dispute of invalid or conflicting transitions.

    Roles

    The operator coordinates the deposit of traders' funds into a global smart contract on the main chain.

    Traders can deposit and withdraw funds.

    Traders place orders and pay the required fee to the operator. The operator includes transactions that pay the agreed-upon fees. It is possible to have different confidential fee agreements for different traders, depending on volume/load/liquidity.

    The operator matches orders in the correct order (better price first) and makes regular checkpoints.

    Traders watch for correctness and submit dispute proofs on the main chain if a fork or an invalid tx is detected. Traders also halt immediately when a violation is detected.

    Note: some of the following actions, in their entirety or in portions, could be implemented once and for all as a ZkVM facility. Other parts are specific to the rules of the trade chain and must be specified as custom ZkVM contract clauses.

    Action 1: deposit

    The operator merges the existing contract with the deposited funds, making them available on the trade chain. We need an instruction to "import" an asset into an embedded chain.

    Action 2: withdrawal

    A trader may request a withdrawal of funds from the trade chain. This request necessarily must be delayed to make sure there was no uncommitted transaction spending the same funds. At the same time, the request must force the operator to mark the utxo as unspendable, as in the forced withdrawal below.

    Action 3: forced withdrawal

    A trader may withdraw funds w/o the cooperation of the operator. This is a slower process, but a necessary insurance against censorship.

    1. The withdrawal request is made against the global contract that holds all the deposits of the trade chain.
    2. The request is delayed to make sure there was no uncommitted transaction spending the same funds.
    3. At the same time, the request forces the operator to mark the utxo as unspendable: if they were to allow a transaction spending it, they'd be violating the rules.

    Action 4: order placement

    Within the trade chain, a passive asset can be locked in an "order contract" that has a cleartext price and a provision to automatically match against a counter-offer with a compatible price.

    Action 5: order matching

    The operator keeps track of the unmatched orders, sorted by price. As new orders overlapping with the order book come in, they are immediately matched, and a new transaction is recorded on the trade chain.

    Every matched trade commits to a structure of the order book, which is used in the main chain contract to check for violations such as skipping over an order (see below).

    Action 6: checkpoint

    The operator periodically bundles transactions into a block that is committed as a "checkpoint" on the main chain: a transition of the main contract to a new state. The checkpoint transition also includes newly deposited funds, which get reflected in the utxo set of the trade chain.

    Action 7: withdrawal dispute

    When a trader initiates a forced withdrawal, they may incorrectly claim a utxo that they have already spent. In such a case, within a timeout (several main chain blocks), a spending trade chain transaction that consumes that utxo can be presented.

    It does not matter whether it is spent in a correct trade chain, because only the owner of the utxo could claim the withdrawal, and if they do so twice, they are clearly at fault and the higher-order request can be cancelled.

    Note: this requires careful domain-separation of signatures and contract IDs in order to work correctly across an arbitrary depth of inception.
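
    For illustration, such domain separation could amount to binding every derived ID to a chain ID in the transcript; merlin is already used for hashing in this repo, though the labels below are made up:

    use merlin::Transcript;

    /// Derives a contract ID bound to a specific chain, so IDs from an
    /// embedded trade chain can never collide with main chain IDs.
    fn contract_id(chain_id: &[u8; 32], contract_bytes: &[u8]) -> [u8; 32] {
        let mut t = Transcript::new(b"ZkVM.contract_id"); // illustrative label
        t.append_message(b"chain_id", chain_id);
        t.append_message(b"contract", contract_bytes);
        let mut id = [0u8; 32];
        t.challenge_bytes(b"id", &mut id);
        id
    }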

    Action 8: invalid checkpoint dispute

    Every checkpoint update by the operator is an opportunity to introduce an invalid state:

    1. invalid utreexo updates
    2. invalid transactions
    3. allowing spending utxos already destroyed during withdrawal
    4. violating custom rules, e.g. regarding correct ordering of bids/asks.

    When a proof of violation is presented, the contract enters a locked mode at the specified state, which can only be backtracked to an even earlier invalid state, but not further.

    Action 9: forked state dispute

    Similar to the invalid-state dispute, this is a presentation of a forked state in between checkpoints, which moves the checkpoint to the block before the fork.

    Required modifications to ZkVM

    The above protocol requires additions to the ZkVM instruction set and VM semantics to support nested chain validation and fraud proofs.

    • Generic merkle proof computation support
    • Nested tx, block and utreexo verification
    • Block signer support for flexible transactions (for resolving commutative multiuser accesses to shared contracts)
    • Correct domain-separation of tx IDs, contract IDs and tx signatures between chains. We probably need to keep asset IDs the same, but keep issuances domain-separated between chain IDs.

    Problem 1: high-level instructions for embedded chain

    In fact, if we assume that ZkVM is enough for any trade chain instantiation, it may be integrated more closely, with high-level "import", "export" and "fraud proof" instructions, as long as these are composable with the additional custom global-level rules (such as the correct ordering of offers).

    Problem 2: race conditions with multiplayer contracts

    The Ethereum/EVM model assumes global state and permits failed txs that nonetheless pay fees. This allows submitting commutative transactions in arbitrary order w/o coordination. If multiple independent users withdraw their own funds from a shared contract, race conditions are avoided, and transactions applied in a different order yield the same, correct state (or at least an equivalent one). However, a UTXO model and most cryptographic accumulator schemes require users to coordinate access to the shared resource, even if without trust. This may open DoS attack vectors and complicates usage.

    Specifically in ZkVM, the "trade chain" accumulator is embedded in a utxo that has to be destroyed by a single transaction, immediately causing a race condition. The alternative is to provide a kind of contract/utxo that allows provably commutative access (similar to Utreexo access). In fact, every contract may be represented not as a plain Contract ID hash, but as a merkle root of any number of independent contracts. Coupled with the notion of embedded chains, this could be a perfect match:

    1. all multiplayer apps have to implement some sort of embedded ledger and need out-of-order access,
    2. that embedded ledger might as well have zkvm format,
    3. zkvm has utreexo state with random-order access (in-between checkpoints),
    4. therefore, all multiplayer contracts might be modelled as embedded utreexo states of the embedded chains.

    This simple Utreexo structure, though, is not extensible. The order book, for instance, is a custom rule that orders bids by cleartext price, which Utreexo does not do. Expressing that requires a mechanism for deploying provably-CRDT programs.

    opened by oleganza 2
Owner
Stellar