Noir is a domain-specific language for zero-knowledge proofs

Overview

The Noir Programming Language

Noir is a domain-specific language for SNARK proving systems. It has been designed to work with any ACIR-compatible proving system.

This implementation is in early development. It has not been reviewed or audited, and it is not suitable for production use. Expect bugs!

Quick Start

Read the installation section here

Once you have read through the documentation, you can also run the examples located in the examples folder.

Current Features

Backends:

  • Barretenberg via FFI

Compiler:

  • Module System
  • For expressions
  • Arrays
  • Bit Operations, except for OR
  • Binary operations (<, <=, >, >=, +, -, *, /) [See documentation for an extensive list]
  • Unsigned integers
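
A minimal sketch of what the features above allow (an illustrative example, not taken from the repository):

    fn main(x : Field, y : Field) {
        // arrays, for loops and mutable bindings are available today
        let values = [1, 2, 3];
        let mut sum = 0;
        for i in 0..3 {
            sum = sum + values[i];
        }
        constrain x + sum == y;
    }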

Supported ACIR opcodes:

  • Sha256
  • Blake2s
  • Schnorr signature verification
  • MerkleMembership
  • Pedersen
  • HashToField

Future Work

The current focus is to gather as much feedback as possible while in the alpha phase. The main focuses of Noir are safety and developer experience. If you find a feature that does not seem to be in line with these goals, please open an issue!

Concretely, the following items are on the roadmap:

  • If statements
  • OR operator
  • General code sanitisation and documentation
  • Prover and Verifier Key logic. (Prover and Verifier preprocess per compile)
  • Structures
  • Visibility modifiers
  • Signed integers
  • Backend integration (Marlin, Bulletproofs)
  • Recursion

License

Noir is free and open source. It is distributed under a dual license (MIT/Apache 2.0).

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Barretenberg License

Barretenberg is currently the only backend that Noir has integrated. It is licensed under GPL v2.0.

Comments
  • Refactor Nargo CLI errors and separate I/O from core logic

    Description

    Summary of changes

    Nargo currently doesn't have a good separation between loading values from Prover.toml/Verifier.toml and using these to build/verify proofs. This PR splits the relevant functions in two so that we load all the inputs and then pass them to a different function in order to validate and process them.

    No changes to functionality should exist, except that some error messages have been modified to no longer refer to the toml files. We also no longer write public inputs to Verifier.toml if proof generation fails.

    Dependency additions / changes

    N/A

    Test additions / changes

    N/A

    Checklist

    • [x] I have tested the changes locally.
    • [x] I have formatted the changes with Prettier and/or cargo fmt with default settings.
    • [x] I have linked this PR to the issue(s) that it resolves.
    • [x] I have reviewed the changes on GitHub, line by line.
    • [x] I have ensured all changes are covered in the description.

    Additional context

    N/A

    opened by TomAFrench 11
  • Change unused variables to warnings

    Changes the unused variable error to a warning, since unused variables should be optimized out by the SSA pass anyway. This affects all variables, even parameters to main. This is also Noir's first warning rather than a hard error, so some small plumbing was added to implement warnings.
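
    For example, a program like the following (a hypothetical snippet, not taken from the test suite) would previously fail to compile and now only emits a warning for the unused parameter:

    fn main(x : Field, unused : Field) {
        // `unused` is never read: this now produces a warning instead of a hard error
        constrain x == x;
    }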

    opened by jfecher 10
  • Merge jf/ssa

    Reverts noir-lang/noir#222 (which itself was a revert). In other words, this PR has been re-opened so it can be reviewed by @guipublic.

    Original PR Description:

    This PR contains the changes from

    • The Operation refactoring
    • The deletion refactoring
    • The ArrayId refactoring.

    It currently passes all tests in cargo test when using evaluate_main_alt

    opened by jfecher 10
  • Refactor Operation enum

    This refactoring is part of a larger effort to clean up the ssa code to use stronger types and rely less on developers memorizing certain invariants.

    In master, Operation is currently only a tag value, yet how many arguments an instruction takes, and what types those arguments are expected to be, depends on this tag. As an example, most SSA code assumes every instruction has two arguments (lhs: NodeId and rhs: NodeId), but quite a few instructions break this promise, and it is up to developers to carefully maintain the invariants:

    • Operation::Not and other unary instructions have only one argument, and the expectation is to always duplicate it so that lhs = rhs.
    • Jmp and other jump instructions should hold a block to jump to, but since neither lhs nor rhs is of type BlockId, they hold the first instruction of a block instead. This obscures the code and makes mistakes more likely (for example, what happens if that instruction is optimized away?).
    • Assign does take two arguments, but in an order that may be confusing: rhs := lhs. That is, the location to assign to is the rhs rather than the lhs.
    • Call and Results previously needed to be separated into two instructions, since the Instruction struct only allowed for one Vec of variable arguments. This means that for every Results instruction we need to go back and search for the call it refers to. This is yet to be changed in this PR, but we can now have a single instruction Call(args, results) to store both.
    • ... and others

    This is a rather large change so I'll be starting the pattern discussed in slack of merging this to a new branch jf/ssa rather than master. ~~It is also unfinished (but is safe to be merged anyway since it is to a new branch). I'm still working on refactoring optim.rs and integer.rs~~

    This is also the first PR in a few I have planned. The next will be on refactoring Instruction::is_deleted so a few todo!("Refactor deletions") remain in this PR where those spots need to be changed. They aren't included in this PR to keep it "small."

    opened by jfecher 10
  • to_bytes in stdlib

    Problem

    It's currently inconvenient to convert a Field element into [u8]. This seems like a common use case, e.g. when incorporating a Field element into a message for signature checks, or into the input of various hash functions.

    Solution

    Define to_bytes as part of the stdlib.
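
    A hypothetical usage of the proposed function (the exact name, argument order and byte count here are assumptions based on how it appears later in this document):

    use dep::std;

    fn main(message : Field) -> pub u8 {
        // decompose the field element into little-endian bytes before feeding it
        // into hashes or signature checks
        let bytes = std::to_bytes(message, 31);
        bytes[0]
    }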

    Alternatives considered

    Document a convenient and recommended snippet for writing the above function directly into your circuit.

    enhancement 
    opened by joss-aztec 9
  • Change constrain to accept any boolean expression

    Implements #235

    Changes the syntax for constrain from constrain <expr> compare_op <expr> to constrain <expr>, where the now-single expression can be any boolean-typed expression.

    Removing the duplicated handling of constrain operators led to a lot of deleted duplicate code, and to some bug fixes in type checking, since the duplicated code often handled the various comparison operators differently inside a constrain statement versus outside of one.

    Some SSA tests rely on == being implemented for arrays, which was previously only possible inside a constrain. The type checker has been slightly changed to allow this for all uses of == in order to maintain functionality.

    This change also caught some odd cases in the tests that had previously gone unnoticed, including comparisons of Field == u8 and even []u8 == u8 in examples/11/src/main.nr.
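
    As a rough illustration of the new syntax (a sketch, not one of the repository's test programs):

    fn main(x : Field, y : Field) {
        // previously constrain required a comparison operator at the top level,
        // e.g. constrain x == y; now any boolean-typed expression is accepted
        let equal = x == y;
        constrain equal;
    }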

    MVP 
    opened by jfecher 8
  • Implement for-each loops over arrays

    Related issue(s)

    Resolves #375

    Description

    Summary of changes

    Implements for-each loops as described in #375 via syntactic sugar.

    ~~Currently there is a rare bug that I may fix. In the expression for e in array { ... } we clone array internally causing it to be re-evaluated which could cause bugs if it mutates local variables. I'll add a commit soon to hoist a new let statement over the entire loop and wrap the loop in a block containing the let statement and the loop itself making the full desugared form:~~ Pushed a commit fixing the bug. The full desugaring for for-each loops is as follows:

    {
        let fresh_ident1 = array;
        for fresh_ident2 in 0 .. std::array::len(fresh_ident1) {
            let e = fresh_ident1[fresh_ident2];
            ...
        }
    }
    

    Test additions / changes

    Adds a quick nested for-each loop example to 6_array

    opened by jfecher 7
  • update backend ref

    Related issue(s)

    Resolves this comment

    Description

    The barretenberg repo was renamed, which caused the Noir build to fail on CI because the aztec_backend reference in nargo's Cargo.toml file pointed at the wrong repo. We are in the process of moving barretenberg to a new repo. I made a temporary PR, which can be found here, that will be used until we fix the issues from the barretenberg migration and can move to the updated repository.

    Summary of changes

    (Describe the changes in this PR. Point out breaking changes if any.)

    Dependency additions / changes

    (If applicable.)

    Test additions / changes

    (If applicable.)

    Checklist

    • [ ] I have tested the changes locally.
    • [ ] I have formatted the changes with Prettier and/or cargo fmt with default settings.
    • [ ] I have linked this PR to the issue(s) that it resolves.
    • [ ] I have reviewed the changes on GitHub, line by line.
    • [ ] I have ensured all changes are covered in the description.

    Additional context

    (If applicable.)

    opened by vezenovm 6
  • Allow more expressions to be used in array sizes

    Related issue(s)

    Resolves #415

    Description

    Summary of changes

    Allows use of global variables and simple arithmetic expressions in addition to integer literals within array-size expressions.

    Previously, an array with repeated elements like [0; 42] required an integer literal (here, 42) for the count of repeated elements. This PR expands this a bit so that the count may also include global integer variables and simple operations on them: +, -, *, /, %, &, |, ^, and !.

    This is done by adding a mini-evaluator during name resolution, which is not ideal. Ideally, we could allow any comptime Field expression by delaying this until SSA and relying on the inlining done in that pass to remove all function calls and comptime variable references, and to fold all arithmetic. For now though, this PR brings us most of the way there.
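
    A small sketch of what this enables (hypothetical code; the exact global declaration syntax is an assumption):

    global N = 4;

    fn main(x : Field) {
        // the repeat count may now be a global plus a simple arithmetic expression,
        // not just an integer literal
        let arr = [x; N + 1];
        constrain arr[0] == x;
    }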

    Test additions / changes

    Adds a quick addition to the globals test.

    Checklist

    • [x] I have tested the changes locally.
    • [x] I have formatted the changes with Prettier and/or cargo fmt with default settings.
    • [x] I have linked this PR to the issue(s) that it resolves.
    • [x] I have reviewed the changes on GitHub, line by line.
    • [x] I have ensured all changes are covered in the description.
    opened by jfecher 6
  • Adds ability to compile the compiler as a wasm module

    This allows the compiler to be compiled to a wasm module that can be called from TypeScript.

    There is a caveat:

    • The standard library cannot be compiled with the wasm version of the compiler. An issue for this will be opened.
    opened by kevaundray 6
  • Make Field generic

    Currently, Noir only compiles for bn254's scalar field; however, there is nothing stopping it from being generic over any field.

    This issue starts a discussion on the work needed to make the compiler generic, and the considerations involved.

    Field Trait

    We would need a NoirField trait instead of the FieldElement struct. The trait methods would initially be the methods that are currently on FieldElement.

    Noir will need to be aware of the possible fields; however, we can delegate this list to ACIR, similar to how we delegate the list of all possible opcodes there.

    Security Level

    The security level of a group would also need to be documented in Noir, so that users can be warned when they are using a group with less than 128-bit security.

    Some considerations about the implementation:

    Question: should Noir ship implementations of all supported fields?

    Compiling with implementations for all supported fields is the current idea; here are the advantages and disadvantages.

    Advantages

    • dyn will not need to be used.
    • There will be a single field element implementation per field that backends can choose to hook into.

    Disadvantages

    • The program will now contain all possible field element implementations, increasing the binary size.
    • If a backend wants to use a new field, they will need to first upstream the implementation and implement the NoirField trait for it (this may not be a disadvantage).
    • Backends will need to convert from a NoirField trait to the type their proving system uses (unless they implement the proving system over the NoirField trait). This may mean a lot of deserialising and a subsequent performance loss.

    Since the NoirField trait can be implemented for the arkworks Field trait, we immediately get all of the field implementations that arkworks provides. For newer fields, I believe the surface area of the NoirField trait is small and generic enough that the effort of wrapping a new field is negligible.

    Another idea would be for backends to supply their own field element implementation to plug into Noir; however, this encourages code duplication between backends and may lead to non-deterministic constraint systems over the same field if the field element implementations differ.

    Compiler aware Field

    The compiler currently does not need to be aware of the chosen field implementation, since there is only one. It is, however, aware of the different backends, and we would use the same mechanism: the chosen field is passed as a parameter in the package manager's config file, and the package manager picks the corresponding data structure, which is then passed to the compiler.

    Rough Action steps

    • Replace the FieldElement struct with the Field trait, but still only compile for bn254.
    • Add all supported fields to noir_field as an enum. We can then add a new method, supported_fields, to the proving system trait, which allows backends to specify which of these fields they support. At this moment in time, there will only be one supported field, bn254.
    • Add bls12-381 and allow the user to state that they want to use bls12-381. The barretenberg backend should return an error, as it only supports bn254. We then modify the arkworks backend, adding bls12_381 to its supported_fields, adding the bls12_381 field as a viable field, and testing whether it compiles.
    ACIR compiler backend nargo ACVM 
    opened by kevaundray 6
  • Fix for mapping the full array during acir gen for to_bytes and to_bits

    Related issue(s)

    Resolves #617

    Description

    After a byte decomposition, I want to be able to pass the full byte array along without having to use any hacky methods to get around the compiler. Currently, this is not possible, as we panic during ACIR generation. This is due to us incorrectly mapping the outputs of our bit/byte decomposition methods. Further information can be found in issue #617.

    The outputs from to_bits and to_bytes are equal in number to the bit size or byte size specified in the Noir program; we currently restrict this to be less than the full bit size or byte size of a Field for safety. However, both methods return fully decomposed arrays of type [u1; 256] and [u8; 32] respectively. If I specify a byte_size of 16, for example, and want to access any element past index 15 in the array returned from to_bytes, this is not possible. This is especially confusing for a developer who wants to pass a decomposed array directly to another std_lib function such as verify_signature.

    Summary of changes

    After evaluating our decomposition methods, we map the outputs to the array pointer (https://github.com/noir-lang/noir/blob/4b36ea01d5ce60c3fb8d1167d0772772ea02a368/crates/noirc_evaluator/src/ssa/acir_gen.rs#L563). This array is set when we are creating our instructions and has a specified length (such as 32 or 254). When we later want to access elements of the array, such as in prepare_inputs, we loop over this mem array (https://github.com/noir-lang/noir/blob/4b36ea01d5ce60c3fb8d1167d0772772ea02a368/crates/noirc_evaluator/src/ssa/acir_gen.rs#L496), which has a larger length than the outputs that have been mapped in memory. Thus, when we check the memory map here (https://github.com/noir-lang/noir/blob/4b36ea01d5ce60c3fb8d1167d0772772ea02a368/crates/noirc_evaluator/src/ssa/acir_gen.rs#L498), the if condition fails and we panic on this line (https://github.com/noir-lang/noir/blob/4b36ea01d5ce60c3fb8d1167d0772772ea02a368/crates/noirc_evaluator/src/ssa/acir_gen.rs#L511), as the mem array we are trying to access has an empty values array.

    I simply changed the map_array function to loop over the mem array's length. When the loop index is still within the bounds of the outputs slice, we map the output as before; otherwise, we map zero.

    Dependency additions / changes

    (If applicable.)

    Test additions / changes

    I added a line that was previously failing to the to_bytes test.

    In my repo testing aztec circuits (https://github.com/vezenovm/aztec-circuits-noir) I was able to replace my utility function for converting a field into bytes and it is fully working with verify_signature now. This code is what originally brought this error to my attention. (https://github.com/vezenovm/aztec-circuits-noir/blob/4dde489370813739acab26514da1dcd80ac047ea/circuits/join_split/src/main.nr#L199)

    Checklist

    • [X] I have tested the changes locally.
    • [X] I have formatted the changes with Prettier and/or cargo fmt with default settings.
    • [X] I have linked this PR to the issue(s) that it resolves.
    • [X] I have reviewed the changes on GitHub, line by line.
    • [X] I have ensured all changes are covered in the description.

    Additional context

    (If applicable.)

    opened by vezenovm 0
  • Implement numeric generics

    Related issue(s)

    Resolves #606

    Description

    Summary of changes

    Allow generics to be used in array length types. For example, this allows

    fn id<N>(array: [Field; N]) -> [Field; N] {
        array
    }
    

    Structs can also be parameterized over array lengths:

    struct Foo<N> {
        limbs: [u64; N],
    }
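
    Combining these, a hypothetical caller might look like the following (assuming the length parameter is inferred from the argument's type):

    fn id<N>(array: [Field; N]) -> [Field; N] {
        array
    }

    fn main(xs : [Field; 3]) -> pub [Field; 3] {
        // N is inferred to be 3 from the type of xs
        id(xs)
    }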
    

    Test additions / changes

    Added test numeric_generics

    Checklist

    • [x] I have tested the changes locally.
    • [x] I have formatted the changes with Prettier and/or cargo fmt with default settings.
    • [x] I have linked this PR to the issue(s) that it resolves.
    • [x] I have reviewed the changes on GitHub, line by line.
    • [x] I have ensured all changes are covered in the description.

    Additional context

    Some care was taken to avoid naming this feature "const generics" even though it is effectively quite similar to rust's const generics feature. Avoiding this name lets us:

    1. Be more clear that we may not support the exact same feature set rust supports here, while also allowing us room in the future to expand or branch off in a different direction.
    2. Avoid a name that is confusing to newcomers. In particular, the const keyword being co-opted for compile-time-evaluated expressions rather than just constants is confusing. This feature is also not called comptime generics in Noir, as we do not currently allow comptime variables within these generics. This is because comptime variables can be passed in as function arguments, and allowing them to be referenced here would require something closer to dependent types.
    opened by jfecher 2
  • for-each sugar does not work in the standard library.

    Description

    Aim

    Using a for-each loop in a std lib file such as hash.nr:

    for elem in array {
        ...
    }
    

    Expected behavior

    The loop should iterate over each element of the array.

    Bug

    Instead, the compiler crashes, claiming it cannot find the std module. This is likely because the above loop desugars into:

    for i in 0 .. std::array::len(array) {
        let elem = array[i];
        ...
    }
    

    and the standard library expects the len function to be referred to as crate::array::len rather than std::array::len.

    There are two items that should be fixed here

    1. The compiler should not crash when a module is not found for import.
    2. We should allow modules to reference themselves by their own name in addition to crate. This enables us to have a non-relative module hierarchy where we do not need to change name references based on which module they are used in.
    bug 
    opened by jfecher 1
  • Bit/byte decomposition std lib functions not correctly mapping an array

    Description

    It is required to set the bit size or byte size in std::to_bits and std::to_bytes to something less than the full field size for safety. However, the result type of both of these functions is the full bit/byte size of a field element, as you can see here.

    If a Noir program uses to_bits or to_bytes and wants to access an element beyond the specified bit size or byte size, there will be a panic. This isn't a huge deal, as any elements beyond the set bit/byte size will just be 0. However, the panic can be confusing for developers, as it occurs during ACIR generation.

    Aim

    Allow developers to access the full array that is outputted from to_bits and to_bytes.

    Expected behavior

    There will be no panic when the full array outputted from to_bits or to_bytes is passed along to other functions and used in operations.

    Bug

    For example I was using to_bytes in this fashion:

     let message_byte_array = std::to_bytes(message, 31 as u32);
     let sig_res = std::schnorr::verify_signature(signer[0], signer[1], signature, message_byte_array);
    

    This would panic with this error:

    thread 'main' panicked at 'index out of bounds: the len is 0 but the index is 31', crates/noirc_evaluator/src/ssa/acir_gen.rs:512:50
    

    To reproduce

    You can also reproduce this with a slight alteration to the to_bytes test under nargo/tests/test_data.

    use dep::std;
    
    fn main(x : Field) -> pub [u8; 4] {
        // The result of this byte array will be little-endian
        let byte_array = std::to_bytes(x, 31);
        let mut first_four_bytes = [0; 4];
        for i in 0..4 {
            first_four_bytes[i] = byte_array[i];
        }
        // This line replaces 0 with 0 and is pointless but will cause a panic
        first_four_bytes[3] = byte_array[31];
        first_four_bytes
    }
    

    If you run nargo prove or nargo gates you will get this error:

    thread 'main' panicked at 'internal error: entered unreachable code: Could not find value at index 31', crates/noirc_evaluator/src/ssa/acir_gen.rs:190:29
    

    The location of the panic in acir_gen is slightly different as we are trying to access the mem array during a load operation rather than during prepare_inputs as I did in the example above.

    You can reproduce with to_bits as well:

    use dep::std;
    
    fn main(x : Field) -> pub [u1; 4] {
        // The result of this byte array will be little-endian
        let bit_array = std::to_bits(x, 31);
        let mut first_four_bits = [0; 4];
        for i in 0..4 {
            first_four_bits[i] = bit_array[i];
        }
        first_four_bits[3] = bit_array[253];
        first_four_bits
    }
    

    which will give you:

    thread 'main' panicked at 'internal error: entered unreachable code: Could not find value at index 253', crates/noirc_evaluator/src/ssa/acir_gen.rs:190:29
    

    Once again, even though the array returned should be [u1; 256] we get a panic as we are not fully mapping the array.

    Environment

    • OS: Mac OS Big Sur 11.2.3

    For nargo users

    • noir-lang/noir commit cloned: 7e9089ff61d6432be3ef4790f557b27470f1dae7
    • Proving backend
      • [X] default
        • Clang: (run clang --version)
        • clang through homebrew v11
      • [ ] wasm-base

    For TypeScript users

    • Node.js: (run node --version)
    • @noir-lang/noir_wasm: (from yarn.lock)
    • @noir-lang/barretenberg: (from yarn.lock)
    • @noir-lang/aztec_backend: (from yarn.lock)

    Additional context

    (If applicable.)

    bug 
    opened by vezenovm 0
  • Add `+=`, `-=`, `*=`, `/=`, `%=`, and others.

    Problem

    Noir lacks the common shorthand for abbreviating <pattern> = <pattern> <binop> <expression> as <pattern> <binop>= <expression>.

    Solution

    Implement this in the language. Since the syntax of patterns on the lhs of an assignment is already restricted, this can be implemented as a relatively straightforward desugaring in the parser, without worrying about side effects from re-evaluating the pattern after the AST node is duplicated.
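
    A sketch of the proposed shorthand and its desugaring (hypothetical, since the feature is not yet implemented):

    fn main(x : Field, y : Field) -> pub Field {
        let mut acc = x;
        // proposed shorthand: acc += y;
        // which would desugar in the parser to:
        acc = acc + y;
        acc
    }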

    enhancement 
    opened by jfecher 0
  • CLI nargo prove fails on Apple Silicon

    Description

    Tried to generate a proof of the Hello World circuit described in the getting started guide.

    Aim

    I am trying to build a proof with the nargo CLI

    Expected behavior

    nargo prove should generate a proof

    Bug

    It throws a panic in CRS

    To reproduce

    1. Install CLI from binaries on Apple Silicon https://noir-lang.github.io/book/getting_started/nargo/installation.html#option-1-binaries
    2. Follow steps to generate a project and a proof https://noir-lang.github.io/book/getting_started/hello_world.html
    3. It throws an error at proof generation step

    Environment

    • OS: mac OS Monterey Version 12.6

    For nargo users

    • noir-lang/noir commit cloned:
    • Proving backend
      • [ ] default
        • Clang: (run clang --version)
      • [ ] wasm-base

    For TypeScript users

    • Node.js: (run node --version)
    • @noir-lang/noir_wasm: (from yarn.lock)
    • @noir-lang/barretenberg: (from yarn.lock)
    • @noir-lang/aztec_backend: (from yarn.lock)

    Additional context

    (If applicable.)

    bug 
    opened by harshadptl 1
Related projects
gRPC client/server for zero-knowledge proof authentication Chaum Pederson Zero-Knowledge Proof in Rust

gRPC client/server for zero-knowledge proof authentication Chaum Pederson Zero-Knowledge Proof in Rust. Chaum Pederson is a zero-knowledge proof proto

Advaita Saha 4 Jun 12, 2023
STARK - SNARK recursive zero knowledge proofs, combinaison of the Winterfell library and the Circom language

STARK - SNARK recursive proofs The point of this library is to combine the SNARK and STARK computation arguments of knowledge, namely the Winterfell l

Victor Colomb 68 Dec 5, 2022
Safeguard your financial privacy with zero-knowledge proofs.

Spinner The Spinner project (https://spinner.cash) takes a privacy first approach to protect users crypto assets. It is a layer-2 protocol built on th

Spinner 21 Dec 28, 2022
RISC Zero is a zero-knowledge verifiable general computing platform based on zk-STARKs and the RISC-V microarchitecture.

RISC Zero WARNING: This software is still experimental, we do not recommend it for production use (see Security section). RISC Zero is a zero-knowledg

RISC Zero 653 Jan 3, 2023
Zero-Knowledge Assembly language and compiler

zkAsm A Zero-Knowledge circuit assembly language, designed to represent Zero-Knowledge circuits in a compressed format, to be stored on blockchains. I

null 1 Dec 30, 2021
A fast zero-knowledge proof friendly Move language runtime environment.

zkMove Lite zkMove Lite is a lightweight zero-knowledge proof friendly Move language virtual machine. Move bytecode is automatically "compiled" into c

YoungRocks 43 May 20, 2023
A Software Development Kit (SDK) for Zero-Knowledge Transactions

Aleo SDK The Aleo SDK is a developer framework to make it simple to create a new account, craft a transaction, and broadcast it to the network. Table

Aleo 270 Jan 5, 2023
Zerocaf: A library built for EC operations in Zero Knowledge.

Dusk-Zerocaf WARNING: WIP Repo. Fast, efficient and bulletproof-friendly cryptographic operations. This repository contains an implementation of the S

Dusk Network 50 Oct 31, 2022
OpenZKP - pure Rust implementations of Zero-Knowledge Proof systems.

OpenZKP OpenZKP - pure Rust implementations of Zero-Knowledge Proof systems. Overview Project currently implements the Stark protocol (see its readme

0x 529 Jan 5, 2023
Vector OLE and zero-knowledge for Z2k.

Mozzarella Benchmarking Code This repository contains the code developed for the benchmarking experiments in our paper: "Moz $\mathbb{Z}_{2^k}$ arella

null 7 Dec 20, 2022
Zero Knowledge Light Client Implementation by Zpoken team.

zkp for chain state Prerecusites This project requires using the nightly Rust toolchain, which can be used by default in this way: rustup default nigh

Zpoken 40 Mar 6, 2023
Spartan2: High-speed zero-knowledge SNARKs.

Spartan2: High-speed zero-knowledge SNARKs. Spartan is a high-speed zkSNARK, where a zkSNARK is a type of cryptographic proof system that enables a prover

Microsoft 7 Jul 28, 2023
Implementation of zero-knowledge proof circuits for Tendermint.

Tendermint X Implementation of zero-knowledge proof circuits for Tendermint. Overview Tendermint X's core contract is TendermintX, which stores the he

Succinct 3 Nov 8, 2023
Noir implementation of RSA-verify

noir-rsa This repository contains an implementation of a RSA signature verify for the Noir language. Currently supports pkcs1v15 + sha256 and exponent

Set Labs 5 Jul 22, 2023
Multilayered Linkable Spontaneous Anonymous Group - Implemented as is from paper. Not Monero specific

MLSAG This is a pure Rust implementation of the Multilayered Linkable Spontaneous Anonymous Group construction. This implementation has not been revie

Crate Crypto 19 Dec 4, 2022
ZKP fork for rust-secp256k1, adds wrappers for range proofs, pedersen commitments, etc

rust-secp256k1 rust-secp256k1 is a wrapper around libsecp256k1, a C library by Peter Wuille for producing ECDSA signatures using the SECG curve secp25

null 53 Dec 19, 2022
Bulletproofs and Bulletproofs+ Rust implementation for Aggregated Range Proofs over multiple elliptic curves

Bulletproofs This library implements Bulletproofs+ and Bulletproofs aggregated range proofs with multi-exponent verification. The library supports mul

[ZenGo X] 62 Dec 13, 2022
P2P Network to verify authorship & ownership, store & deliver proofs.

Anagolay Network Node Anagolay is a next-generation framework for ownerships, copyrights and digital licenses. Local Development The installation a

Anagolay Network 5 May 30, 2022
The Light Protocol program verifies zkSNARK proofs to enable anonymous transactions on Solana.

Light Protocol DISCLAIMER: THIS SOFTWARE IS NOT AUDITED. Do not use in production! Tests cd ./program && cargo test-bpf deposit_should_succeed cd ./pr

null 36 Dec 17, 2022