Winterfell 🐺

A STARK prover and verifier for arbitrary computations.

WARNING: This is a research project. It has not been audited and may contain bugs and security flaws. This implementation is NOT ready for production use.

Overview

A STARK is a novel proof-of-computation scheme for creating efficiently verifiable proofs of the correct execution of a computation. The scheme was developed by Eli Ben-Sasson, Michael Riabzev et al. at Technion - Israel Institute of Technology. STARKs do not require an initial trusted setup and rely on very few cryptographic assumptions. See the references for more info.

The aim of this project is to build a feature-rich, easy-to-use, and highly performant STARK prover which can generate integrity proofs for very large computations. The STARK proof generation process is massively parallelizable; however, it also requires a lot of RAM. For very large computations, the amount of RAM available on a single machine may not be sufficient to generate a proof efficiently. Therefore, our final goal is to distribute proof generation efficiently across many machines.

Status and features

Winterfell is a fully-functional, multi-threaded STARK prover and verifier with the following nice properties:

A simple interface. This library provides a relatively simple interface for describing general computations. See usage for a quick tutorial, the air crate for a description of the interface, and the examples crate for a few real-world examples.

Multi-threaded proof generation. When compiled with concurrent feature enabled, the proof generation process will run in multiple threads. The library also supports concurrent construction of execution trace tables. The performance section showcases the benefits of multi-threading.

Configurable fields. Both the base and the extension field for proof generation can be chosen dynamically. This simplifies fine-tuning of proof generation for specific performance and security targets. See the math crate for a description of the currently available fields.
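
For instance (a minimal sketch, not taken from the library docs; it assumes the f64 and f128 field modules described in the math crate), switching the base field is mostly a matter of swapping the element type:

use winterfell::math::fields::{f128::BaseElement as Felt128, f64::BaseElement as Felt64};

fn field_demo() {
    // The same arithmetic works over a 64-bit base field...
    let a = Felt64::new(3);
    assert_eq!(a * a * a + Felt64::new(42), Felt64::new(69));

    // ...and over a 128-bit base field, without changing the computation itself.
    let b = Felt128::new(3);
    assert_eq!(b * b * b + Felt128::new(42), Felt128::new(69));
}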

Configurable hash functions. The library allows dynamic selection of hash functions used in the STARK protocol. Currently, BLAKE3 and SHA3 hash functions are supported, and support for arithmetization-friendly hash functions (e.g. Rescue) is planned.
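
For example (a sketch mirroring the ProofOptions usage shown later in this README; the parameter values are illustrative), switching from BLAKE3 to SHA3 is a one-line change in the proof options:

use winterfell::{FieldExtension, HashFunction, ProofOptions};

fn sha3_options() -> ProofOptions {
    ProofOptions::new(
        32, // number of queries
        8,  // blowup factor
        0,  // grinding factor
        HashFunction::Sha3_256, // instead of HashFunction::Blake3_256
        FieldExtension::None,
        8,   // FRI folding factor
        128, // FRI max remainder length
    )
}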

WebAssembly support. The library is written in pure Rust and can be compiled to WebAssembly. The std feature is enabled by default for both prover and verifier crates. For WASM targets, one can disable default features and compile with the alloc feature instead (e.g. cargo build --no-default-features --features alloc --target wasm32-unknown-unknown).

Planned features

Over time, we hope to extend the library with additional features:

Distributed prover. Distributed proof generation is the main priority of this project, and we hope to release an update containing it soon.

Perfect zero-knowledge. The current implementation provides succinct proofs but NOT perfect zero-knowledge. This means that, in its current form, the library may not be suitable for use cases where proofs must not leak any info about secret inputs.

Project structure

The project is organized into several crates like so:

| Crate | Description |
| ----- | ----------- |
| examples | Contains examples of generating/verifying proofs for several toy and real-world computations. |
| prover | Contains an implementation of a STARK prover which can be used to generate computational integrity proofs. |
| verifier | Contains an implementation of a STARK verifier which can verify proofs generated by the Winterfell prover. |
| winterfell | Re-exports the prover and verifier crates as a single crate for simplified dependency management. |
| air | Contains components needed to describe arbitrary computations in a STARK-specific format. |
| fri | Contains an implementation of a FRI prover and verifier. These are used internally by the STARK prover and verifier. |
| math | Contains modules with math operations needed in STARK proof generation/verification, including finite field arithmetic, polynomial arithmetic, and FFTs. |
| crypto | Contains modules with cryptographic operations needed in STARK proof generation/verification, specifically hash functions and Merkle trees. |
| utils | Contains a set of utility traits, functions, and macros used throughout the library. |

Usage

Generating STARK proofs for a computation is a relatively complicated process. This library aims to abstract away most of the complexity; however, users are still expected to provide descriptions of their computations in a STARK-specific format. This format is called algebraic intermediate representation, or AIR for short.

This library contains several higher-level constructs which make defining AIRs for computations a little easier, and there are also examples of AIRs for several computations available in the examples crate. However, the best way to understand the STARK proof generation process is to go through a trivial example from start to finish.

First, we'll need to pick a computation for which we'll be generating and verifying STARK proofs. To keep things simple, we'll use the following:

use winterfell::math::{fields::f128::BaseElement, FieldElement};

fn do_work(start: BaseElement, n: usize) -> BaseElement {
    let mut result = start;
    for _ in 1..n {
        result = result.exp(3) + BaseElement::new(42);
    }
    result
}

This computation starts with an element in a finite field and then, for the specified number of steps, cubes the element and adds 42 to it.

Suppose we run this computation for a million steps and get some result. Using STARKs, we can prove that we did the work correctly without requiring any verifying party to re-execute the computation. Here is how to do it.

First, we need to define an execution trace for our computation. This trace should capture the state of the computation at every step of its execution. In our case, the trace is just a single column of intermediate values after each execution of the loop. For example, if we start with value 3 and run the computation for 1,048,576 (same as 2^20) steps, the execution trace will look like this:

| Step | State |
| ---- | ----- |
| 0 | 3 |
| 1 | 69 |
| 2 | 328551 |
| 3 | 35465687262668193 |
| 4 | 237280320818395402166933071684267763523 |
| ... | ... |
| 1,048,575 | 247770943907079986105389697876176586605 |

To record the trace, we'll use the ExecutionTrace struct provided by the library. The function below is just a modified version of the do_work() function which records every intermediate state of the computation in the ExecutionTrace struct:

use winterfell::{
    math::{fields::f128::BaseElement, FieldElement},
    ExecutionTrace,
};

pub fn build_do_work_trace(start: BaseElement, n: usize) -> ExecutionTrace {
    // Instantiate the trace with a given width and length; this will allocate all
    // required memory for the trace
    let trace_width = 1;
    let mut trace = ExecutionTrace::new(trace_width, n);

    // Fill the trace with data; the first closure initializes the first state of the
    // computation; the second closure computes the next state of the computation based
    // on its current state.
    trace.fill(
        |state| {
            state[0] = start;
        },
        |_, state| {
            state[0] = state[0].exp(3u32.into()) + BaseElement::new(42);
        },
    );

    trace
}

Next, we need to define algebraic intermediate representation (AIR) for our computation. This process is usually called arithmetization. We do this by implementing the Air trait. At the high level, the code below does three things:

  1. Defines what the public inputs for our computation should look like. These inputs are called "public" because they must be known to both the prover and the verifier.
  2. Defines a transition function with a single transition constraint. This transition constraint must evaluate to zero for all valid state transitions, and to non-zero for any invalid state transition. The degree of this constraint is 3 (see more about constraint degrees here).
  3. Defines two assertions against an execution trace of our computation. These assertions tie a specific set of public inputs to a specific execution trace (see more about assertions here).

For more information about arithmetization, see the air crate. Here is the actual code:

use winterfell::{
    math::{fields::f128::BaseElement, FieldElement},
    Air, AirContext, Assertion, ByteWriter, EvaluationFrame, ProofOptions, Serializable,
    TraceInfo, TransitionConstraintDegree,
};

// Public inputs for our computation will consist of the starting value and the end result.
pub struct PublicInputs {
    start: BaseElement,
    result: BaseElement,
}

// We need to describe how public inputs can be converted to bytes.
impl Serializable for PublicInputs {
    fn write_into<W: ByteWriter>(&self, target: &mut W) {
        target.write(self.start);
        target.write(self.result);
    }
}

// For a specific instance of our computation, we'll keep track of the public inputs and
// the computation's context which we'll build in the constructor. The context is used
// internally by the Winterfell prover/verifier when interpreting this AIR.
pub struct WorkAir {
    context: AirContext<BaseElement>,
    start: BaseElement,
    result: BaseElement,
}

impl Air for WorkAir {
    // First, we'll specify which finite field to use for our computation, and also what
    // the public inputs must look like.
    type BaseElement = BaseElement;
    type PublicInputs = PublicInputs;

    // Here, we'll construct a new instance of our computation which is defined by 3 parameters:
    // starting value, number of steps, and the end result. Another way to think about it is that
    // an instance of our computation is a specific invocation of the do_work() function.
    fn new(trace_info: TraceInfo, pub_inputs: PublicInputs, options: ProofOptions) -> Self {
        // our execution trace should have only one column.
        assert_eq!(1, trace_info.width());

        // Our computation requires a single transition constraint. The constraint itself
        // is defined in the evaluate_transition() method below, but here we need to specify
        // the expected degree of the constraint. If the expected and actual degrees of the
        // constraints don't match, an error will be thrown in debug mode; in release mode,
        // an invalid proof will be generated which will not be accepted by any verifier.
        let degrees = vec![TransitionConstraintDegree::new(3)];
        WorkAir {
            context: AirContext::new(trace_info, degrees, options),
            start: pub_inputs.start,
            result: pub_inputs.result,
        }
    }

    // In this method we'll define our transition constraints; a computation is considered to
    // be valid, if for all valid state transitions, transition constraints evaluate to all
    // zeros, and for any invalid transition, at least one constraint evaluates to a non-zero
    // value. The `frame` parameter will contain current and next states of the computation.
    fn evaluate_transition<E: FieldElement + From<Self::BaseElement>>(
        &self,
        frame: &EvaluationFrame<E>,
        _periodic_values: &[E],
        result: &mut [E],
    ) {
        // First, we'll read the current state, and use it to compute the expected next state
        let current_state = &frame.current()[0];
        let next_state = current_state.exp(3u32.into()) + E::from(42u32);

        // Then, we'll subtract the expected next state from the actual next state; this will
        // evaluate to zero if and only if the expected and actual states are the same.
        result[0] = frame.next()[0] - next_state;
    }

    // Here, we'll define a set of assertions about the execution trace which must be satisfied
    // for the computation to be valid. Essentially, this ties computation's execution trace
    // to the public inputs.
    fn get_assertions(&self) -> Vec<Assertion<Self::BaseElement>> {
        // for our computation to be valid, value in column 0 at step 0 must be equal to the
        // starting value, and at the last step it must be equal to the result.
        let last_step = self.trace_length() - 1;
        vec![
            Assertion::single(0, 0, self.start),
            Assertion::single(0, last_step, self.result),
        ]
    }

    // This is just boilerplate which is used by the Winterfell prover/verifier to retrieve
    // the context of the computation.
    fn context(&self) -> &AirContext<Self::BaseElement> {
        &self.context
    }
}

Now we are finally ready to generate a STARK proof. The function below will execute our computation and return the result together with a proof that the computation was executed correctly.

use winterfell::{
    math::{fields::f128::BaseElement, FieldElement},
    ExecutionTrace, FieldExtension, HashFunction, ProofOptions, StarkProof,
};

pub fn prove_work() -> (BaseElement, StarkProof) {
    // We'll just hard-code the parameters here for this example.
    let start = BaseElement::new(3);
    let n = 1_048_576;

    // Build the execution trace and get the result from the last step.
    let trace = build_do_work_trace(start, n);
    let result = trace.get(0, n - 1);

    // Define proof options; these will be enough for ~96-bit security level.
    let options = ProofOptions::new(
        32, // number of queries
        8,  // blowup factor
        0,  // grinding factor
        HashFunction::Blake3_256,
        FieldExtension::None,
        8,   // FRI folding factor
        128, // FRI max remainder length
    );

    // Generate the proof.
    let pub_inputs = PublicInputs { start, result };
    let proof = winterfell::prove::<WorkAir>(trace, pub_inputs, options).unwrap();

    (result, proof)
}

We can then give this proof (together with the public inputs) to anyone, and they can verify that we did in fact execute the computation and got the claimed result. They can do this like so:

panic!("something went terribly wrong!"), } } ">
pub fn verify_work(start: BaseElement, result: BaseElement, proof: StarkProof) {
    // The number of steps and options are encoded in the proof itself, so we
    // don't need to pass them explicitly to the verifier.
    let pub_inputs = PublicInputs { start, result };
    match winterfell::verify::<WorkAir>(proof, pub_inputs) {
        Ok(_) => println!("yay! all good!"),
        Err(_) => panic!("something went terribly wrong!"),
    }
}
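
To tie everything together, here is a minimal driver (a hypothetical main(), not part of the library; it assumes prove_work() and verify_work() from above are in scope and uses the same hard-coded starting value):

fn main() {
    // Run the computation and generate a proof that it was executed correctly.
    let (result, proof) = prove_work();

    // Anyone holding the proof and the public inputs can now verify the claim.
    verify_work(BaseElement::new(3), result, proof);
}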

That's all there is to it! As mentioned above, the examples crate contains examples of much more interesting computations (together with instructions on how to compile and run these examples). So, do check it out.

Performance

The Winterfell prover's performance depends on many factors, including the nature of the computation itself, the efficiency of its AIR encoding, proof generation parameters, hardware setup, etc. Thus, the benchmarks below should be viewed as directional: they illustrate general trends, but concrete numbers will differ across computations and parameter choices.

The computation we benchmark here is a chain of Rescue hash invocations (see examples for more info). The benchmarks were run on an Intel Core i9-9980HK @ 2.4 GHz with 32 GB of RAM, using all 8 cores.

| Chain length | Trace time | Proving time (100-bit security) | Proof size (100-bit security) | Proving time (128-bit security) | Proof size (128-bit security) | R1CS equiv. |
| ------------ | ---------- | ------------------------------- | ----------------------------- | ------------------------------- | ----------------------------- | ----------- |
| 2^10 | 0.1 sec | 0.04 sec | 65 KB | 0.07 sec | 102 KB | 2^18 constraints |
| 2^12 | 0.4 sec | 0.14 sec | 81 KB | 0.25 sec | 128 KB | 2^20 constraints |
| 2^14 | 1.4 sec | 0.6 sec | 100 KB | 1 sec | 156 KB | 2^22 constraints |
| 2^16 | 6 sec | 2.5 sec | 119 KB | 4 sec | 184 KB | 2^24 constraints |
| 2^18 | 24 sec | 11 sec | 141 KB | 18 sec | 216 KB | 2^26 constraints |
| 2^20 | 94 sec | 50 sec | 166 KB | 89 sec | 252 KB | 2^28 constraints |

A few remarks about these benchmarks:

  • Trace time is the time it takes to generate an execution trace for the computation. This time does not depend on the chosen security level. For this specific computation, trace generation must be sequential, and thus, cannot take advantage of multiple cores. However, for other computations, where execution trace can be generated in parallel, trace time would be much smaller in relation to the proving time (see below).
  • R1CS equiv. is a very rough estimate of how many R1CS constraints would be required for this computation. The assumption here is that a single invocation of Rescue hash function requires ~250 R1CS constraints.
  • Not shown in the table: the time it takes to verify proofs in all benchmarks above is between 2 ms and 6 ms using a single CPU core.
  • As can be seen from the table, with STARKs we can dynamically trade off proof size, proof security level, and proving time against each other; the sketch below shows how these knobs are exposed through ProofOptions.
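
As an illustration of that trade-off (a sketch with hypothetical parameter choices, not taken from the benchmarks above), two different points on the curve might be configured like so:

use winterfell::{FieldExtension, HashFunction, ProofOptions};

// Favor proving time: a small blowup factor keeps the prover fast, while more
// queries compensate to maintain the security level. Proofs get larger.
fn fast_prover_options() -> ProofOptions {
    ProofOptions::new(
        48, // number of queries
        8,  // blowup factor
        0,  // grinding factor
        HashFunction::Blake3_256,
        FieldExtension::None,
        8,   // FRI folding factor
        128, // FRI max remainder length
    )
}

// Favor proof size: a larger blowup factor and some grinding allow fewer
// queries, shrinking the proof at the cost of a slower prover.
fn small_proof_options() -> ProofOptions {
    ProofOptions::new(
        27, // number of queries
        16, // blowup factor
        16, // grinding factor
        HashFunction::Blake3_256,
        FieldExtension::None,
        8,   // FRI folding factor
        128, // FRI max remainder length
    )
}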

Let's benchmark another example. This time our computation will consist of verifying many Lamport+ signatures (see example). This is a much more complicated computation. For comparison, the execution trace for the Rescue hash chain requires only 4 columns, while for Lamport+ signature verification we use 22 columns. The table below shows benchmarks for verifying different numbers of signatures on the same 8-core machine (at a 123-bit security level).

| # of signatures | Trace time | Proving time | Prover RAM | Proof size | Verifier time |
| --------------- | ---------- | ------------ | ---------- | ---------- | ------------- |
| 64 | 0.2 sec | 1.2 sec | 0.5 GB | 110 KB | 4.4 ms |
| 128 | 0.4 sec | 2.6 sec | 1.0 GB | 121 KB | 4.4 ms |
| 256 | 0.8 sec | 5.3 sec | 1.9 GB | 132 KB | 4.5 ms |
| 512 | 1.6 sec | 10.9 sec | 3.8 GB | 139 KB | 4.9 ms |
| 1024 | 3.2 sec | 20.5 sec | 7.6 GB | 152 KB | 5.9 ms |

A few observations about these benchmarks:

  • Trace time and prover RAM (RAM consumed by the prover during proof generation) grow pretty much linearly with the size of the computation.
  • Proving time grows very slightly faster than linearly with the size of the computation.
  • Proof size and verifier time grow much slower than linearly (in fact, logarithmically) with the size of the computation; for example, going from 64 to 1,024 signatures (a 16x larger computation) increases proof size only from 110 KB to 152 KB.

Another difference between this example and the Rescue hash chain is that the execution trace for each signature verification can be generated independently, so the entire trace can be built in parallel using multiple threads. In general, Winterfell prover performance scales nearly linearly with every additional CPU core, because nearly all steps of the STARK proof generation process can be parallelized. The table below illustrates this relationship using the example of verifying 1,024 Lamport+ signatures (at a 123-bit security level). This time, our benchmark machine is an AMD EPYC 7003 with 64 CPU cores.

| Threads | Trace time | Proving time | Total time (trace + proving) | Improvement |
| ------- | ---------- | ------------ | ---------------------------- | ----------- |
| 1 | 28 sec | 127 sec | 155 sec | 1x |
| 2 | 14 sec | 64 sec | 78 sec | 2x |
| 4 | 6.7 sec | 33 sec | 39.7 sec | 3.9x |
| 8 | 3.8 sec | 17 sec | 20.8 sec | 7.5x |
| 16 | 2 sec | 10.3 sec | 12.3 sec | 12.6x |
| 32 | 1 sec | 6 sec | 7 sec | 22.1x |
| 64 | 0.6 sec | 3.8 sec | 4.4 sec | 35.2x |

References

If you are interested in learning how STARKs work under the hood, here are a few links to get you started. From the standpoint of this library, arithmetization is by far the most important concept to understand.

Vitalik Buterin's blog series on zk-STARKs:

StarkWare's STARK Math blog series:

License

This project is MIT licensed.

Acknowledgements

The original codebase was developed by Irakliy Khaburzaniya (@irakliyk) with contributions from François Garillot (@huitseeker), Kevin Lewi (@kevinlewi), Konstantinos Chalkias (@kchalkias), and Jasleen Malvai (@Jasleen1).

Comments
  • Supporting RAPs

    Hi,

    First, this is a really nice project. Thank you!

    Onto my question: the recent Cairo paper describes using Randomized AIRs with Preprocessing (RAPs) for Cairo's memory. While they don't describe in detail how they modify the STARK protocol, it appears to allow the prover to first commit to a set of trace columns, then use that commitment to generate randomness that can be used in an additional set of trace columns. The main use case here is that with such capability, local constraints (as in AIR) can be used to verify global properties (such as that one column is a permutation of another, which is what is done in Cairo's memory).

    Is this of interest to anyone here? I'd really appreciate some pointers or advice on how the API for winterfell might be best modified to support this. Of course, if the modifications are of use to anyone here I'd be happy to contribute them.

    Thanks and have a nice weekend

    enhancement 
    opened by pgrinaway 21
  • feat: MDS mult. using FFT & special matrix

    This PR implements an optimization, discussed in the setting of the Poseidon hash function here, and relies on FFT-based fast matrix-vector multiplication techniques for circulant matrices. See here for a light exposition of the ideas used. By choosing an MDS matrix of a very special form, we are able to implement several optimizations on top of the FFT-based multiplication. More precisely, the MDS matrix we chose has components that are small powers of two in "frequency domain" and some of these powers are even equal to zero, provided we scale the (i)FFT appropriately. This translates to multiplications, in frequency domain, being substituted with shifts or even removed entirely. Moreover, the small powers-of-two entries permit very efficient scheduling of modular reductions. The special matrix we use was found using very optimized code that was developed by the Polygon Zero team. The current implementation benefited, in addition, from the discussions and insights of Hamish Ivey-Law and Jacqueline Nabaglo. Overall, the resulting improvements are around 20% compared to the original implementation.
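
    For background (a standard fact about circulant matrices, not specific to this PR): if $C$ is circulant with first column $c$, then for any vector $x$,

    $$Cx = \mathrm{IDFT}\big(\mathrm{DFT}(c) \odot \mathrm{DFT}(x)\big),$$

    where $\odot$ denotes component-wise multiplication. Choosing $c$ so that $\mathrm{DFT}(c)$ consists of zeros and small powers of two is what turns the frequency-domain multiplications into shifts (or removes them entirely), as described above.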

    cla signed 
    opened by Al-Kindi-0 13
  • update `ExecutionTrace` to latest `TraceTable` struct in README example

    As per the latest v0.3.0, the struct ExecutionTrace must be replaced with TraceTable in the example of the README.md file.

    The PR updates the example and will make the example consistent with the latest v0.3.0 of Winterfell.

    cla signed 
    opened by bajpai244 11
  • Add check to speed up RP by 50%

    This extra check speeds up Rescue Prime on my DELL XPS (16 GB RAM) by 50%.

    Executing cargo criterion hash gives me:

    Before this commit: hash_rp64_256 (random) time: [31.142 us 31.200 us 31.292 us]

    After this commit: hash_rp64_256 (random) time: [17.937 us 17.951 us 17.966 us]

    My best guess for why this significant speedup happens is that the multiplication in add can sometimes be avoided.

    cla signed 
    opened by Sword-Smith 10
  • Multi-segment execution trace

    This PR builds on #77 and adds multi-segment execution trace support.

    • [x] Update TraceInfo struct.
    • [x] Update prover to commit to multiple trace segments.
    • [x] Update proof struct to support queries against multiple trace segments.
    • [x] Update Air trait to support handling of auxiliary trace columns, including for transition and boundary constraints.
    • [x] Update verifier to support constraint evaluation and composition for auxiliary trace segments.
    • [x] Update prover to support constraint evaluation and composition for auxiliary trace segments.
    cla signed 
    opened by irakliyk 8
  • Make all crates (except examples) no-std compliant

    I attempted to make all crates no-std for my personal use, but assumed it could be useful in the main repository, following what has been said in https://github.com/novifinancial/winterfell/issues/37. The crates now rely on the std feature by default, which can be replaced by the alloc feature.

    The rand dependency actually mentioned in the linked issue is not moved to dev-dependency, so that the examples crate can still provide the merkle and lamport examples without changes. I don't know if that's problematic / something that you really wanted to get rid of. Note that the examples crate is not no-std, but it can take the prover and verifier dependencies with the alloc feature enabled only (at the cost of losing the two examples mentioned previously).

    The CI includes now a check for all-crates with target wasm32-unknown-unknown.

    If the pull-request is of any interest and requires change, please let me know what should be fixed!

    cla signed 
    opened by Nashtare 7
  • Performance improvement in Matrix::interpolate_columns

    Instead of cloning the entire matrix with a single .clone(), which will clone each column vector sequentially, do each column clone inside the parallelized loop. This has a significant performance impact when run with concurrency.

    Running the stark-bench benchmarks on a 64-core AWS machine (c7g.16xlarge) leads to a ~57% improvement in the interpolation step. UPDATE: The parameters I used to obtain this result were stark-bench -c 255 -n 23 -b 2.

    cla signed 
    opened by jimpo 6
  • Unable to compile examples

    vadym@ubuntu:~/winterfell$ cargo build --release --manifest-path examples/Cargo.toml --features concurrent
      Downloaded crossbeam-epoch v0.9.5
      Downloaded memoffset v0.6.4
      Downloaded rayon-core v1.9.1
      Downloaded crossbeam-utils v0.8.5
      Downloaded rayon v1.5.1
      Downloaded crossbeam-deque v0.8.1
      Downloaded crossbeam-channel v0.5.1
      Downloaded 7 crates (423.1 KB) in 0.75s
       Compiling autocfg v1.0.1
       Compiling crossbeam-utils v0.8.5
       Compiling crossbeam-epoch v0.9.5
       Compiling scopeguard v1.1.0
       Compiling rayon-core v1.9.1
       Compiling either v1.6.1
       Compiling num_cpus v1.13.0
       Compiling structopt v0.3.25
       Compiling memoffset v0.6.4
       Compiling rayon v1.5.1
       Compiling crossbeam-channel v0.5.1
       Compiling crossbeam-deque v0.8.1
       Compiling winter-utils v0.2.0 (/home/vadym/winterfell/utils/core)
       Compiling winter-math v0.2.0 (/home/vadym/winterfell/math)
       Compiling winter-rand-utils v0.2.0 (/home/vadym/winterfell/utils/rand)
       Compiling winter-crypto v0.2.0 (/home/vadym/winterfell/crypto)
    error[E0277]: `[B; N]` is not an iterator
      --> crypto/src/hash/rescue/mod.rs:23:27
       |
    23 |     result.iter_mut().zip(tail).for_each(|(r, t)| *r *= t);
       |                           ^^^^
       |                           |
       |                           expected an implementor of trait `IntoIterator`
       |                           help: consider borrowing here: `&tail`
       |
       = note: the trait bound `[B; N]: IntoIterator` is not satisfied
       = note: required because of the requirements on the impl of `IntoIterator` for `[B; N]`
    
    error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>`, but its trait bounds were not satisfied
      --> crypto/src/hash/rescue/mod.rs:23:33
       |
    23 |     result.iter_mut().zip(tail).for_each(|(r, t)| *r *= t);
       |                                 ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>` due to unsatisfied trait bounds
       | 
      ::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
       |
    13 | pub struct Zip<A, B> {
       | -------------------- doesn't satisfy `_: Iterator`
       |
       = note: the following trait bounds were not satisfied:
               `[B; N]: Iterator`
               which is required by `std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>: Iterator`
               `std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>: Iterator`
               which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, B>, [B; N]>: Iterator`
    
    error[E0277]: `[[winter_math::fields::f62::BaseElement; 12]; 12]` is not an iterator
       --> crypto/src/hash/rescue/rp62_248/mod.rs:262:27
        |
    262 |     result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
        |                           ^^^
        |                           |
        |                           expected an implementor of trait `IntoIterator`
        |                           help: consider borrowing here: `&MDS`
        |
        = note: the trait bound `[[winter_math::fields::f62::BaseElement; 12]; 12]: IntoIterator` is not satisfied
        = note: required because of the requirements on the impl of `IntoIterator` for `[[winter_math::fields::f62::BaseElement; 12]; 12]`
    
    error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>`, but its trait bounds were not satisfied
       --> crypto/src/hash/rescue/rp62_248/mod.rs:262:32
        |
    262 |     result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
        |                                ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>` due to unsatisfied trait bounds
        | 
       ::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
        |
    13  | pub struct Zip<A, B> {
        | -------------------- doesn't satisfy `_: Iterator`
        |
        = note: the following trait bounds were not satisfied:
                `[[winter_math::fields::f62::BaseElement; 12]; 12]: Iterator`
                which is required by `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>: Iterator`
                `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>: Iterator`
                which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [[winter_math::fields::f62::BaseElement; 12]; 12]>: Iterator`
    
    error[E0277]: `[winter_math::fields::f62::BaseElement; 12]` is not an iterator
       --> crypto/src/hash/rescue/rp62_248/mod.rs:311:26
        |
    311 |     state.iter_mut().zip(acc).for_each(|(s, a)| *s *= a);
        |                          ^^^
        |                          |
        |                          expected an implementor of trait `IntoIterator`
        |                          help: consider borrowing here: `&acc`
        |
        = note: the trait bound `[winter_math::fields::f62::BaseElement; 12]: IntoIterator` is not satisfied
        = note: required because of the requirements on the impl of `IntoIterator` for `[winter_math::fields::f62::BaseElement; 12]`
    
    error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>`, but its trait bounds were not satisfied
       --> crypto/src/hash/rescue/rp62_248/mod.rs:311:31
        |
    311 |     state.iter_mut().zip(acc).for_each(|(s, a)| *s *= a);
        |                               ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>` due to unsatisfied trait bounds
        | 
       ::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
        |
    13  | pub struct Zip<A, B> {
        | -------------------- doesn't satisfy `_: Iterator`
        |
        = note: the following trait bounds were not satisfied:
                `[winter_math::fields::f62::BaseElement; 12]: Iterator`
                which is required by `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>: Iterator`
                `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>: Iterator`
                which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f62::BaseElement>, [winter_math::fields::f62::BaseElement; 12]>: Iterator`
    
    error[E0277]: `[[winter_math::fields::f64::BaseElement; 12]; 12]` is not an iterator
       --> crypto/src/hash/rescue/rp64_256/mod.rs:262:27
        |
    262 |     result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
        |                           ^^^
        |                           |
        |                           expected an implementor of trait `IntoIterator`
        |                           help: consider borrowing here: `&MDS`
        |
        = note: the trait bound `[[winter_math::fields::f64::BaseElement; 12]; 12]: IntoIterator` is not satisfied
        = note: required because of the requirements on the impl of `IntoIterator` for `[[winter_math::fields::f64::BaseElement; 12]; 12]`
    
    error[E0599]: the method `for_each` exists for struct `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>`, but its trait bounds were not satisfied
       --> crypto/src/hash/rescue/rp64_256/mod.rs:262:32
        |
    262 |     result.iter_mut().zip(MDS).for_each(|(r, mds_row)| {
        |                                ^^^^^^^^ method cannot be called on `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>` due to unsatisfied trait bounds
        | 
       ::: /home/vadym/snap/rustup/common/rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/zip.rs:13:1
        |
    13  | pub struct Zip<A, B> {
        | -------------------- doesn't satisfy `_: Iterator`
        |
        = note: the following trait bounds were not satisfied:
                `[[winter_math::fields::f64::BaseElement; 12]; 12]: Iterator`
                which is required by `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>: Iterator`
                `std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>: Iterator`
                which is required by `&mut std::iter::Zip<std::slice::IterMut<'_, winter_math::fields::f64::BaseElement>, [[winter_math::fields::f64::BaseElement; 12]; 12]>: Iterator`
    
    error: aborting due to 8 previous errors
    
    Some errors have detailed explanations: E0277, E0599.
    For more information about an error, try `rustc --explain E0277`.
    error: could not compile `winter-crypto`
    
    To learn more, run the command again with --verbose.
    vadym@ubuntu:~/winterfell$
    
    opened by vikulin 6
  • Custom evaluation frames

    This is an incomplete and very rough outline of the approach that I had in mind for issue #80. I've highlighted code segments below with some questions.

    cla signed 
    opened by maxgillett 5
  • Add Rescue Prime as a hash function option for prover/verifier

    Rescue Prime hash function has been implemented as a part of #50. However, it is not yet integrated into the prover and verifier.

    There is one challenging aspect here: RP62_248 works only in the f62 field; however, the prover and verifier expect the hash function to be generic over all fields. It would be nice if we could do some kind of "type narrowing" - e.g. if there is a mismatch between the base field of the AIR and the hash function, a runtime error would be thrown; otherwise everything would work as expected. But I'm not sure if this is currently possible in Rust.

    enhancement 
    opened by irakliyk 5
  • feat: implement Montgomery form for f64

    This PR implements modular arithmetic for the prime field of order $2^{64} - 2^{32} + 1$ using Montgomery form. The implementation follows the one in this article and is constant-time. Some of the tests were changed in order to adapt them to the Montgomery form logic. This implementation achieves varying degrees of improvement over the previous one, e.g. NTT is faster by ~5% while Rescue Prime is up to 30% faster.

    cla signed 
    opened by Al-Kindi-0 4
  • Rescue and griffin with Jive compression mode

    This PR aims at providing additional options for algebraic hash functions support, in particular in the light of two recent works:

    The PR implements two new instantiations of hash functions, more suited to Winterfell's purpose as tailored towards Merkle tree compression vs arbitrary data slice hashing:

    • an instantiation of Rescue-Prime over f64 using Jive compression mode: it follows the same specification changes as Rp64_256, except that this instance has a state width of 8 (instead of 12). In particular the security margin has also been reduced to 40% instead of 50% to obtain 7 rounds.
    • an instantiation of Griffin over f64 using Jive compression mode: hence also having a state width of 8. The security margin has been reduced to 15% instead of 20% to also obtain 7 rounds. However, (see below), the modified matrix for the linear layer is MDS and has a larger branch factor, which could alleviate some potential concerns regarding the margin reduction.

    Both instances use the same MDS matrix, crafted in the spirit of Rp64_256 matrix, i.e. a circulant matrix of size 8 with coefficients in frequency domain that are powers of two, to rely on FFTs for efficient matrix-vector product in the linear layer. Its first row is [23, 8, 13, 10, 7, 6, 21, 8]. I cannot thank @Al-Kindi-0 enough for helping me through the process and providing helper code. The best Christmas gift I got this year!

    Note on the MDS matrix choice: I was actually planning on having some deterministic process to generate a valid one, but by taking the one of Rp64_256 and trimming the first subvector to test my code, I realized the obtained matrix was actually MDS.

    As for performance, I benchmarked the two functions (and compared with Rp64_256) for two variants of the fib-small example:

    • Large but faster instance: ./target/release/winterfell -e 3 -b 2 -q 112 --hash_fn HASH_FN fib-small -n 262144

    • Small but slower instance: ./target/release/winterfell -e 3 -b 8 -q 42 --hash_fn HASH_FN fib-small -n 262144

    each targeting 128-bit security level. The results on my machine (Ubuntu 22.04 - Intel® Core™ i7-9750H CPU @ 2.60GHz × 12) are the following:

    • Large instance: -- Rp64_256: Prover: 3100 ms / Verifier: 60 ms -- RpJive64_256: Prover: 2100 ms / Verifier: 39 ms (i.e. 1.48x / 1.54x faster than Rp64_256) -- GriffinJive64_256: Prover: 550 ms / Verifier: 17 ms (i.e. 5.64x / 3.53x faster than Rp64_256)

    • Small instance: -- Rp64_256: Prover: 10400 ms / Verifier: 35 ms -- RpJive64_256: Prover: 8400 ms / Verifier: 23 ms (i.e. 1.24x / 1.52x faster than Rp64_256) -- GriffinJive64_256: Prover: 2000 ms / Verifier: 8 ms (i.e. 5.20x / 4.38x faster than Rp64_256)

    I couldn't find some reference implementation of Griffin in Python for easily testing the code, so I made one here.

    I guess this will take time to review / discuss as it's holidays season, but I wanted to share this before 2022 ends! Happy new year! 🥳

    cla signed 
    opened by Nashtare 1
  • More efficient remainder verification in FRI

    Currently, the last step in FRI verification is https://github.com/novifinancial/winterfell/blob/e43b75d365e03a71288ede25a6c2c9d9dc021a4d/fri/src/verifier/mod.rs#L307-L319

    This is inefficient in, at least, two ways:

    1. The commitment to the remainder is a Merkle tree. This is inefficient because we don't really use the property of vector-commitment that a Merkle tree provides in the above implementation and we can in fact get away with the weaker notion of set-commitment, provided the set is composed of elements of the form (position,remainder[position]).
    2. Combined with the previous point, the verifier can, instead of checking that remainder[position] == evaluation, just compute the set-commitment to (position, evaluations[position]) and then check that the two set-commitments are equal. This assumes that the queries at the initial layer are drawn in such a way that the last folded positions span the range [0, remainder.len() - 1]. I assume this is already implemented in Winterfell but I haven't checked.

    As for the set-commitment, a sequential hash of remainder[i] for $i$ in increasing order might be sufficient.

    opened by Al-Kindi-0 1
  • New FftInputs trait

    This PR partly addresses #93. It introduces an FftInputs trait in the math crate, which defines the shape of the input for the FFT computation. In this PR, the trait is implemented on [E].

    cla signed 
    opened by 0xKanekiKen 2
  • merkle consistency proofs

    It doesn't look like the merkle crate supports consistency proofs between two different trees, but it does support (batched) inclusion proofs for leaves. Would y'all be open to adding consistency proof support?

    opened by chris-wood 5
  • Field security origin?

    In the conjectured security evaluation, we have: let field_security = field_size - lde_domain_size.trailing_zeros(); (i.e. $\lambda \leq \log_2(\mathbb{K}) - \log_2(D)$), which bounds the security level of a given STARK proof by the bit size of the extension field on which we operate minus the bit size of the execution trace (after extension). This conflicts with the conjectured security formulae given in the ethSTARK paper, page 40, where for field security we only consider the size of the extension field.
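
    To make the formula concrete (a worked example with illustrative numbers): for a 128-bit extension field and an execution trace of length $2^{20}$ extended with a blowup factor of 8, the LDE domain has size $2^{23}$, so this estimate gives field_security = 128 - 23 = 105 bits, whereas the ethSTARK formula would give 128 bits for the same setup.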

    I was then wondering where this $- \log(D)$ was coming from? Would it have a link with the much more complex formula for proven security?

    If that is the case, then wouldn't it make sense to stick to the "conjectured field security" estimate for the conjectured security level?

    opened by Nashtare 1
  • [DRAFT] Custom evaluation frame

    This PR continues the work done in PR #91 and partially addresses #80. This PR defines an evaluation frame trait with a 2-row consecutive frame instead of a multi-row frame. The latter will be tackled in future PRs.

    cla signed 
    opened by 0xKanekiKen 0
Releases (v0.4.2)
  • v0.4.2 (Nov 15, 2022)

  • v0.4.1 (Oct 25, 2022)

    • Increased transition constraint exemption limit by 1.
    • Implemented custom doubling for the f64 and f62 fields.
    • Moved f64 field arithmetic to Montgomery form (constant time).
    • Updated the MDS matrix and related multiplication routine for the Rp64_256 hash function.
    • Improved performance of Matrix::interpolate_columns function.
    • Added ability to "unbatch" a BatchMerkleProof (via BatchMerkleProof::into_paths() method).
    • Changed visibility of FRI utils (made them public).
    • Added support for FRI folding factor of 2 (in addition to 4, 8, and 16).
  • v0.4.0 (Apr 29, 2022)

    • Added support for Randomized AIR (with example).
    • Added support for custom number of transition constraint exemptions.
    • Enabled transition constraints of degree n + 1 when blowup factor is n.
    • Moved trace and constraint commitment construction into separate functions in the Prover trait.
    • Introduced Matrix struct in the prover which is used as a backing type for trace and constraint evaluations.
    • Added ExtensionOf trait and implemented it for all supported fields.
    • Sped up inversion in f64 field by using inversion method based on Fermat’s little theorem.
    • Implemented Randomizable trait for u32, u16, and u8 types.
    • [BREAKING] AirContext::new() now requires num_assertions parameter.
    • [BREAKING] Various interface changes in the Air trait to support multi-segment traces.
    • Increased min version of rustc to 1.60.
  • v0.3.0 (Jan 5, 2022)

    • Added f64 field.
    • Added support for cubic field extensions.
    • Added an implementation of Rescue Prime hash function in f64 field.
    • Switched to Rust 2021 and increased min version of rustc to 1.57.
    • [BREAKING] Renamed Air::BaseElement to Air::BaseField.
    • [BREAKING] Replaced prover::prove() function with Prover trait.
    • [BREAKING] Split ExecutionTrace struct into Trace trait and TraceTable struct.
  • v0.2.0 (Aug 24, 2021)

    • Added Blake3_192 as hash function option.
    • Implemented high-performance version of Rescue Prime hash function.
    • Removed alloc feature in favor of turning on no_std via --no-default-features flag only.
    • Moved rand dependency to dev-dependencies only and removed hashbrown dependency.
    • Increased min version of rustc to 1.54.