Eternally liquid. Forward compatible. Nested, conditional, & Multi-resourced NFTs.

Overview

RMRK Substrate

Rust Setup

First, complete the basic Rust setup instructions.

Run

Use Rust's native cargo command to build and launch the template node:

cargo run --release -- --dev --tmp

Build

The cargo run command will perform an initial build. Use the following command to build the node without launching it:

cargo build --release

Embedded Docs

Once the project has been built, the following command can be used to explore all parameters and subcommands:

./target/release/rmrk-substrate -h

Run

The provided cargo run command will launch a temporary node and its state will be discarded after you terminate the process. After the project has been built, there are other ways to launch the node.

Single-Node Development Chain

This command will start the single-node development chain with persistent state:

./target/release/rmrk-substrate --dev

Purge the development chain's state:

./target/release/rmrk-substrate purge-chain --dev

Start the development chain with detailed logging:

RUST_BACKTRACE=1 ./target/release/rmrk-substrate -ldebug --dev

Connect with Polkadot-JS Apps Front-end

Once the node template is running locally, you can connect the Polkadot-JS Apps front-end to it to interact with your chain. By default the node's WebSocket endpoint is ws://127.0.0.1:9944; point the Apps UI at that endpoint.

Multi-Node Local Testnet

If you want to see the multi-node consensus algorithm in action, refer to our Start a Private Network tutorial.

RMRK Pallets structure

A Substrate project such as this consists of a number of components that are spread across a few directories.

Node

A blockchain node is an application that allows users to participate in a blockchain network. Substrate-based blockchain nodes expose a number of capabilities:

  • Networking: Substrate nodes use the libp2p networking stack to allow the nodes in the network to communicate with one another.
  • Consensus: Blockchains must have a way to come to consensus on the state of the network. Substrate makes it possible to supply custom consensus engines and also ships with several consensus mechanisms that have been built on top of Web3 Foundation research.
  • RPC Server: A remote procedure call (RPC) server is used to interact with Substrate nodes.

There are several files in the node directory - take special note of the following:

  • chain_spec.rs: A chain specification is a source code file that defines a Substrate chain's initial (genesis) state. Chain specifications are useful for development and testing, and critical when architecting the launch of a production chain. Take note of the development_config and testnet_genesis functions, which are used to define the genesis state for the local development chain configuration. These functions identify some well-known accounts and use them to configure the blockchain's initial state.
  • service.rs: This file defines the node implementation. Take note of the libraries that this file imports and the names of the functions it invokes. In particular, there are references to consensus-related topics, such as the longest chain rule, the Aura block authoring mechanism and the GRANDPA finality gadget.

After the node has been built, refer to the embedded documentation to learn more about the capabilities and configuration parameters that it exposes:

./target/release/rmrk-substrate --help

Runtime

In Substrate, the terms "runtime" and "state transition function" are analogous - they refer to the core logic of the blockchain that is responsible for validating blocks and executing the state changes they define. The Substrate project in this repository uses the FRAME framework to construct a blockchain runtime. FRAME allows runtime developers to declare domain-specific logic in modules called "pallets". At the heart of FRAME is a helpful macro language that makes it easy to create pallets and flexibly compose them to create blockchains that can address a variety of needs.

Review the FRAME runtime implementation included in this template and note the following:

  • This file configures several pallets to include in the runtime. Each pallet configuration is defined by a code block that begins with impl $PALLET_NAME::Config for Runtime.
  • The pallets are composed into a single runtime by way of the construct_runtime! macro, which is part of the core FRAME Support library.

Pallets

The runtime in this project is constructed using many FRAME pallets that ship with the core Substrate repository and a template pallet that is defined in the pallets directory.

A FRAME pallet is composed of a number of blockchain primitives:

  • Storage: FRAME defines a rich set of powerful storage abstractions that makes it easy to use Substrate's efficient key-value database to manage the evolving state of a blockchain.
  • Dispatchables: FRAME pallets define special types of functions that can be invoked (dispatched) from outside of the runtime in order to update its state.
  • Events: Substrate uses events and errors to notify users of important changes in the runtime.
  • Errors: When a dispatchable fails, it returns an error.
  • Config: The Config configuration interface is used to define the types and parameters upon which a FRAME pallet depends.

Run in Docker

First, install Docker and Docker Compose.

Then run the following command to start a single node development chain.

./scripts/docker_run.sh

This command will first compile your code and then start a local development network. You can also replace the default command (cargo build --release && ./target/release/rmrk-substrate --dev --ws-external) by appending your own. A few useful ones are as follows.

# Run Substrate node without re-compiling
./scripts/docker_run.sh ./target/release/rmrk-substrate --dev --ws-external

# Purge the local dev chain
./scripts/docker_run.sh ./target/release/rmrk-substrate purge-chain --dev

# Check whether the code is compilable
./scripts/docker_run.sh cargo check
Comments
  • Parachain(s) integration question

    Parachain(s) integration question

    This is a question about the rmrk-core pallet: how heavily is the RMRK frontend code married to calls like mint_nft, send, add_resource, as well as to the architecture where nesting is a function of the same pallet as the NFT logic? For example, could we replace api.tx.rmrkCore.send with api.tx.unique.transfer, and api.tx.rmrkCore.addResource with api.tx.nesting.add?

    question 
    opened by gregzaitsev 9
  • Test Coverage

    Test Coverage

    Hi, I am new to open source development and would like to participate. Bruno said I should not mention Jira/Confluence but he recommended creating an issue here :-)

    With grcov I have created a test coverage html tree --> https://github.com/SiAlDev/rmrk-substrate/tree/main/target/debug/coverage

    I could work on covering all sorts of test scenarios, different data sets etc. I just need a bit guidance to get going.

    Best, Silvio

    opened by SiAlDev 8
  • Lazy ownership tree based on pallet-unique's owner

    Lazy ownership tree based on pallet-unique's owner

    An important feature of the RMRK spec is to allow any NFT to own other NFTs. This is called nested NFTs, and it makes NFTs composable. Typical use cases are:

    • A set of NFTs can be combined into a single object that can be sent and sold at auction as a whole
    • By following some special rules (defined in BASE), some NFTs can be combined in a meaningful way that produces special effects. E.g. glasses can be equipped to a Kanaria bird and rendered as a complete portrait

    Current approach

    Currently, the basic NFT is implemented by pallet-unique. On top of it, RMRKCore offers advanced features, including nested NFTs. The basic logic can be described as below:

    • RMRKCore stores an additional layer of the NFT ownership
      • Let's call it rmrk-owner, in contrast to unique-owner
    • Every RMRK NFT has a rmrk-owner
    • Rmrk-owner can be either AccountId or CollectionIdAndNftTuple (Nft for short)
    • When minted, both rmrk-owner and unique-owner are set to the creator
    • When burned, it recursively burns all the child NFTs
    • When transferred, the destination can be either an AccountId or Nft
      • When transferring to an AccountId, both rmrk-owner and unique-owner are set to the new owner. Then it recursively sets the unique-owner of all its children to the new owner.
      • When transferring to a Nft, the rmrk-owner is set to the dest NFT, and the unique-owner is set to the unique-owner of the dest NFT. Then it recursively sets the unique-owner of all its children to the same account.
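    The recursive bookkeeping above can be illustrated with a small toy model (plain Rust with hypothetical names; the real pallets keep these maps in on-chain storage):

```rust
use std::collections::HashMap;

// Toy model of the current two-layer ownership: every NFT has a
// unique-owner (always an account) and an rmrk-owner (account or NFT).
type Account = u32;
type Nft = (u32, u32); // (CollectionId, NftId)

#[derive(Clone, Debug, PartialEq)]
enum RmrkOwner {
    Account(Account),
    Nft(Nft),
}

#[derive(Default)]
struct State {
    unique_owner: HashMap<Nft, Account>,
    rmrk_owner: HashMap<Nft, RmrkOwner>,
    children: HashMap<Nft, Vec<Nft>>,
}

impl State {
    // Transferring to an account must rewrite the unique-owner of every
    // descendant -- the recursive overhead criticised below.
    fn transfer_to_account(&mut self, nft: Nft, dest: Account) {
        self.rmrk_owner.insert(nft, RmrkOwner::Account(dest));
        self.set_unique_owner_rec(nft, dest);
    }

    fn set_unique_owner_rec(&mut self, nft: Nft, dest: Account) {
        self.unique_owner.insert(nft, dest);
        for child in self.children.get(&nft).cloned().unwrap_or_default() {
            self.set_unique_owner_rec(child, dest);
        }
    }
}
```

    Note how a single transfer walks the whole subtree, which is exactly the O(n) cost the proposal below removes.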

    This approach is suboptimal in three aspects:

    • rmrk-owner and unique-owner are theoretically redundant -- the former is a superset of the latter. This makes it harder to keep the two relationships in sync.
    • The burn and transfer operations have to be recursive because we need to proactively update the ownership of the NFTs in unique (unique-owner). This is an overhead.
    • If we allow users direct access to the underlying pallet-unique (or through XCM), it's likely they can break the sync between unique and RMRK

    So here, we'd prefer an approach that can minimize the redundancy of the data and reduce the complexity and the overhead to maintain the data structure.

    Proposal

    The proposal is simple: use the ownership in pallet-unique to trace the hierarchy of the NFTs.

    The ownership in pallet-unique is represented by AccountId. At first glance, it doesn't fit our requirement, which is to allow an NFT to own other NFTs. However, an AccountId doesn't need to be a real wallet; it's just an address that can be backed by anything. In our case, we can create a "virtual account" for each NFT instance by mapping its (CollectionId, NftId) tuple to an AccountId via hashing.

    Once each NFT has a virtual AccountId associated, we can safely establish the hierarchy with pallet-unique's builtin ownership:

    • When minted: just mint a unique NFT
    • When transferred: simply transfer the NFT to the new owner
      • When transferring to an AccountId, just call unique's transfer
      • When transferring to a NFT, generate its corresponding virtual AccountId, and then do the same transfer as above
    • When burned: check that the NFT has no child, then just burn it

    In this way, each operation is O(1) except the ownership check (discussed below). Even if you transfer an NFT with many descendants, or put it up for sale, only the ownership of the top one needs to be changed.

    The virtual account trick

    This is a common trick widely used in Substrate. Virtual accounts are allocated to on-chain multisig wallets, anonymous proxies, and even some pallets. The benefit is that the AccountId unifies the identity representation for all kinds of entities; it fits anywhere an account is accepted to manage ownership.

    The virtual accounts are usually just hard-coded strings or hashes without any private key associated. For example, the AccountId of a multisig wallet can be generated by hash('multisig' ++ encode([owners])). Similarly, we can generate the virtual account id for each NFT like below:

    fn nft_to_account_id<AccountId: Codec>(col: CollectionId, nft: NftId) -> AccountId {
        let preimage = (col, nft).encode();
        let hash = blake2_256(&preimage);
        (b"RmrkNft/", hash)
            .using_encoded(|b| AccountId::decode(&mut TrailingZeroInput::new(b)))
            .expect("Decoding with trailing zero never fails; qed.")
    }
    

    In the above example, "RmrkNft/" takes 8 bytes, and if AccountId is 32 bytes, we have 24 bytes of entropy left for the hash of the NFT ids.
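    The layout arithmetic can be checked with a std-only sketch (a stand-in hash replaces blake2_256 here, so the account bytes are illustrative only):

```rust
// Sketch of the 32-byte account layout: an 8-byte tag followed by the
// first 24 bytes of a hash. `fake_hash` is a toy stand-in for blake2_256.
fn fake_hash(preimage: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (i, b) in preimage.iter().enumerate() {
        out[i % 32] ^= b.wrapping_mul(31).wrapping_add(i as u8);
    }
    out
}

fn nft_to_account_bytes(col: u32, nft: u32) -> [u8; 32] {
    let mut preimage = Vec::new();
    preimage.extend_from_slice(&col.to_le_bytes());
    preimage.extend_from_slice(&nft.to_le_bytes());
    let hash = fake_hash(&preimage);

    let mut account = [0u8; 32];
    account[..8].copy_from_slice(b"RmrkNft/"); // fixed 8-byte prefix
    account[8..].copy_from_slice(&hash[..24]); // 24 bytes of hash entropy
    account
}
```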

    Or, if we are sure we can get an address space large enough to include the ids directly (e.g. AccountId32 can include CollectionId + NftId, which is just 8 bytes), we can encode the ids into the account id directly. This makes reverse lookup much easier:

    fn nft_to_account_id<AccountId: Codec>(col: CollectionId, nft: NftId) -> AccountId {
        (b"RmrkNft/", col, nft)
            .using_encoded(|b| AccountId::decode(&mut TrailingZeroInput::new(b)))
            .expect("Decoding with trailing zero never fails; qed.")
    }
    
    fn decode_nft_account_id<AccountId: Codec>(account_id: AccountId) -> Option<(CollectionId, NftId)> {
        let (prefix, tuple, suffix) = account_id
            .using_encoded(|mut b| {
                let slice = &mut b;
                let r = <([u8; 8], (CollectionId, NftId))>::decode(slice);
                r.map(|(prefix, tuple)| (prefix, tuple, slice.to_vec()))
            })
            .ok()?;
        // Check prefix and suffix to avoid collision attack
        if &prefix == b"RmrkNft/" && suffix.iter().all(|&x| x == 0) {
            Some(tuple)
        } else {
            None
        }
    }
    

    Ownership check

    A drawback of this approach is that, given an NFT in a tree, you cannot easily tell whether a user is the owner of the tree. This can be solved with a reverse lookup:

    • Save a reverse map: the map from the virtual accounts to the NFTs
      • (This map can actually be omitted if decode_nft_account_id is available)
    • When finding the tree owner, we start from an NFT and check whether its owner is in the reverse map
      • If not, this NFT is the root, and its owner is the result
      • If so, we recursively look up the parent NFT until we reach the root

    In this way, a lookup is O(h) where h is the depth of the NFT in the tree. It can be described in pseudocode:

    type AccountPreimage<T> = StorageMap<_, TwoX64Concat, T::AccountId, (CollectionId, NftId)>;
    
    fn lookup_root_owner(col: CollectionId, nft: NftId) -> (T::AccountId, (CollectionId, NftId)) {
        let parent = pallet_uniques::Pallet::<T>::owner(col, nft);
        match AccountPreimage::<T>::get(&parent) {
            None => (parent, (col, nft)),
            Some((col, nft)) => lookup_root_owner(col, nft),
        }
    }
    

    Children

    When burning an NFT, we need to ensure it has no children. This can be done by either:

    • require no child under it,
    • or automatically transfer its children to the caller before burning,
    • or automatically burn all the children before burning.

    It's easy to recursively burn all the children. To track the children of an NFT, a straightforward solution is to keep a record of all its children and update the list dynamically when adding or removing a child:

    type Children<T> = StorageMap<_, TwoX64Concat, (CollectionId, NftId), Vec<(CollectionId, NftId)>, ValueQuery>;
    
    fn add_child(parent: (CollectionId, NftId), child: (CollectionId, NftId)) {
        Children::<T>::mutate(parent, |v| {
            v.push(child)
        });
    }
    fn remove_child(parent: (CollectionId, NftId), child: (CollectionId, NftId)) {
        Children::<T>::mutate(parent, |v| {
            v.retain(|nft| *nft != child);
        });
    }
    fn has_child(parent: (CollectionId, NftId)) -> bool {
        !Children::<T>::get(parent).is_empty()
    }
    

    Caveat: Note that this essentially builds an index for the ownership of the NFTs. It can only get updated if the change of ownership was initiated by the RMRK pallet. A bad case: if a user transfers an NFT to another NFT via pallet-uniques directly, the RMRK pallet will know nothing about this ownership change. The children index will be out of sync, and therefore some children may be missed when burning the parent NFT.

    A solution is to somehow disallow pallet-uniques from transferring an NFT to another NFT. Alternatively, we can add a notification handler to pallet-uniques that calls back whenever a transfer is done.
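    The notification-handler idea could be sketched as a trait hook (plain Rust; the names are hypothetical and this is not the real pallet-uniques API):

```rust
use std::marker::PhantomData;
use std::sync::atomic::{AtomicU32, Ordering};

type Nft = (u32, u32);
type Account = u32;

// Hypothetical callback trait: the base NFT pallet invokes it after
// every transfer, so the RMRK layer can keep its children index in sync.
trait TransferNotify {
    fn on_transfer(nft: Nft, from: Account, to: Account);
}

// The base pallet is generic over the hook; `()` means "no listener".
impl TransferNotify for () {
    fn on_transfer(_nft: Nft, _from: Account, _to: Account) {}
}

struct BasePallet<N: TransferNotify>(PhantomData<N>);

impl<N: TransferNotify> BasePallet<N> {
    fn transfer(nft: Nft, from: Account, to: Account) {
        // ... ownership storage would be updated here ...
        N::on_transfer(nft, from, to); // notify the listener
    }
}

// A listener standing in for the RMRK pallet: it just counts calls.
static NOTIFIED: AtomicU32 = AtomicU32::new(0);
struct RmrkLayer;
impl TransferNotify for RmrkLayer {
    fn on_transfer(_nft: Nft, _from: Account, _to: Account) {
        NOTIFIED.fetch_add(1, Ordering::SeqCst);
    }
}
```

    Static dispatch through the generic parameter means the hook costs nothing when unused, which matters for weight accounting.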

    Pseudocode

    fn mint(origin: OriginFor<T>, owner: T::AccountId, col: CollectionId, nft: NftId) -> DispatchResult {
        ensure_signed(origin.clone())?;
        pallet_uniques::Pallet::<T>::mint(origin, owner, col, nft)?;
        Ok(())
    }
    
    fn burn(origin: OriginFor<T>, owner: T::AccountId, col: CollectionId, nft: NftId) -> DispatchResult {
        let who = ensure_signed(origin)?;
        // Check ownership
        let (root_owner, _) = lookup_root_owner(col, nft);
        ensure!(who == root_owner, Error::BadOrigin);
        // Check no children
        ensure!(!has_child((col, nft)), Error::CannotBurnNftWithChild);
        // Burn
        let parent = pallet_uniques::Pallet::<T>::owner(col, nft);
        pallet_uniques::Pallet::<T>::burn(Origin::signed(parent), col, nft)?;
        // Maintain the children
        if parent != root_owner {
            remove_child(parent, (col, nft));
        }
        Ok(())
    }
    
    fn transfer(origin: OriginFor<T>, col: CollectionId, nft: NftId, dest: AccountIdOrNftTuple) -> DispatchResult {
        let who = ensure_signed(origin)?;
        // Check ownership
        let (root_owner, root_nft) = lookup_root_owner(col, nft);
        ensure!(who == root_owner, Error::BadOrigin);
        // Prepare transfer
        let dest_account = match dest {
            AccountId(id) => id,
            NftTuple(c, n) => {
                // Circle detection
                let (_, dest_root_nft) = lookup_root_owner(c, n);            
                ensure!(root_nft != dest_root_nft, Error::CircleDetected);
                // Convert to virtual account
                nft_to_account_id(c, n)
            },
        };
        let owner = pallet_uniques::Pallet::<T>::owner(col, nft);
        pallet_uniques::Pallet::<T>::transfer(
            Origin::signed(owner),
            col, nft,
            dest_account,
        )?;
        Ok(())
    }
    

    Update notes

    • 2021-12-31
      • Update the pseudocode in the "virtual account" section, also add an alternative encode scheme
      • Update burning and children indexing section according to Bruno's feedback
      • Correct the symbol names of pallet-uniques
      • Fix typos
    opened by h4x3rotab 8
  • allow minting directly to nft

    allow minting directly to nft

    This is basic implementation but some missing pieces

    • [x] need to add tests:
    • [x] mint to nft works,
    • [x] mint to nft fails if nft doesn't exist,
    • [x] mint to non-owned nft results in pending state
    • [x] other tests?
    • [x] currently i think minting to a non-owned nft with resources will leave each resource in pending state, which is likely undesirable. we want the resources accepted on minted adds. two possible solutions: (1) implement an auto-accept right after the resource add inside of the mint [con: bunch of events, maybe more weight], (2) add an during_mint parameter to resource_add, which would override the requirement for sender to equal owner. i'd vote for option 2.
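    Option (2) could be sketched as follows (plain-Rust toy with hypothetical names; the real dispatchable would take an origin and touch storage, not bare integers):

```rust
// Hypothetical sketch of option (2): a `during_mint` flag that lets
// resource_add skip the sender-must-equal-owner check while minting,
// so resources attached at mint time start out accepted.
fn resource_add(sender: u32, owner: u32, during_mint: bool) -> &'static str {
    if during_mint || sender == owner {
        "accepted"
    } else {
        "pending" // non-owner additions normally await acceptance
    }
}
```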
    opened by bmacer 7
  • NFT Lock Mechanism

    NFT Lock Mechanism

    Targets

    • [x] Temporarily add Uniques pallet
    • [x] Add a dependency injection Trait to Uniques pallet for RMRK Core pallet to implement
    • [x] Implement Lock logic in RMRK Core
    • [x] Add tests to ensure Locks operate correctly
    • [x] Rebase changes with main branch
    • [x] Submit PR for Uniques pallet in Substrate repo to add the dependency injection Trait
    • [x] After PR accepted remove Uniques pallet from RMRK repo
    • [x] Make PR suggested changes
    • [x] Update Cargo to polkadot-v0.9.22 when it is released with Locker Trait

    Procedure

    • Temporarily add a local copy of the Uniques pallet to the repo
    • Implement the dependency injection Trait for the Lock and implement a function to default to false
    • Implement the new Lock Trait in RMRK Core and add logic to configure the Lock status as a bool
    • Add error checks and tests to ensure a NFT has the following limitations:
      • no adding children
      • no changing priority
      • no adding resources
      • no equip/unequip
      • no mutating attributes or properties
    • Pseudocode ex: https://gist.github.com/h4x3rotab/152d05c9b5a18462e8dfe2b299b58db8

    RMRK Spec

    opened by HashWarlock 7
  • Additional parameters for burn_nft

    Additional parameters for burn_nft

    It seems that the current weight of burn_nft is too small because it is a recursive transaction.

    A benchmark should be written to generate the appropriate weight for the transaction. Although it can be written later, there is an architectural question -- what signature should burn_nft have?

    I believe it is essential to have additional parameters like max_recursions to actually be able to write a correct benchmark. For burn_nft, it would be necessary to have even two additional parameters (like max_burns and max_breadth), because the weight of actually burning an NFT differs from the weight of traversing an NFT bundle during the burn transaction.

    Also, it would be good to use only one argument of some struct type. Here is why:

    • You can't mess up the order of the limit arguments (like max_burns and max_breadth)
    • You can easily add something to it
    • You can make optional arguments with default values
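    The single-struct signature suggested above might look like this (field names and defaults are hypothetical):

```rust
// Hypothetical limits argument for burn_nft: one struct instead of
// several positional u32 limits, so call sites cannot swap them and
// new fields can be added later without breaking existing calls.
#[derive(Clone, Copy, Debug, PartialEq)]
struct BurnLimits {
    max_burns: u32,   // total NFTs that may be burned in one call
    max_breadth: u32, // children visited per node during traversal
}

impl Default for BurnLimits {
    fn default() -> Self {
        BurnLimits { max_burns: 100, max_breadth: 10 }
    }
}
```

    Callers can then override a single field with struct-update syntax, e.g. `BurnLimits { max_burns: 5, ..Default::default() }`.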
    opened by mrshiposha 6
  • Add Resource During Mint Process

    Add Resource During Mint Process

    Currently we have to add the Resource after minting an NFT in Phala World, which requires the owner to call accept_resource to get a resource for the NFT. I would suggest adding another parameter for a Resource to be added during the minting process, to avoid adding a Resource and requiring the new owner to accept it.

    enhancement 
    opened by HashWarlock 6
  • Bug/93 update vec to bounded vec

    Bug/93 update vec to bounded vec

    addresses #93. We may be able to alias some of the generics to make it a bit more readable, but this should pass tests for all of equip, market and core, and should successfully remove #[pallet::without_storage_info]

    opened by bmacer 6
  • RPC and tests

    RPC and tests

    RMRK RPC API and Integration Tests

    RPC

    The RMRK RPC description can be found in docs/rpc.md

    The RPC is declared in the rmrk-rpc crate.

    The Runtime implements the RPC API in the runtime/src/lib.rs inside the impl_runtime_apis macro. The node exposes the RPC interface described in the rpc.md. The RPC interface implementation passes each RPC call to the RMRK runtime API. The RPC interface declaration and implementation can be found in the file node/src/rpc.rs.

    Changes to the RMRK
    • The RMRK types now can be serialized
    • Several auxiliary functions were added to the pallets to implement the RMRK runtime API

    Integration Tests

    The Integration Tests are located in the tests/src directory. They use the RPC interface to fetch data from the node.

    • All transactions used in the tests are located in tests/src/util/tx.ts.
    • All "fetch" functions like getNft are located in tests/src/util/fetch.ts. Here you can see an example of the RPC interface usage.
    • All "helper" functions are located in tests/src/util/helpers.ts.
    • Type augmentation located in tests/src/interfaces, autogenerated, a lot of lines of code :)
    How to start the tests
    # (In the rmrk-substrate directory)
    
    # Run the node
    cargo run --release -- --dev --tmp
    
    # (In another terminal)
    # Start the tests
    cd tests && yarn test
    

    Instead of running all the tests at once, you can run a separate test if you like. For instance, you can type yarn testSendNft to run the tests/src/sendNft.test.ts test.

    All the tests have the following name pattern: <test-name>.test.ts. To run a separate test you can type the following: yarn test<test-name>

    opened by mrshiposha 5
  • Convert Collection ID and NFT ID to BoundedVec

    Convert Collection ID and NFT ID to BoundedVec

    I believe we should convert Collection ID and NFT ID values from u32 to BoundedVec. The current way is incrementally indexed (Collection 0, 1, 2, etc). Proposal is to switch to user-defined symbols of some max length. I think one drawback would be storage size, but the benefits would be usability and likely cross-chain functionality.

    Consider creating collection KANBIRD and an NFT of this collection, SUPERFOUNDER_1. The collection ID/NFT ID tuple would be (KANBIRD, SUPERFOUNDER_1) as opposed to (0, 0). Obviously this would make interaction with the pallet more user-friendly. Also, I suspect cross-chain activities will be easier: transferring across chains would only (conceptually) fail if the NFT (KANBIRD, SUPERFOUNDER_1) already exists on the destination chain (less likely than (0, 0), obviously).

    There are a few challenges in developing this way, as we currently have some reliance on the numbering of Collection ID (specifically I'm thinking of collection max count) but I believe this can be overcome pretty easily.

    Resource ID is already implemented this way, and seems to be surviving like this.
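    A minimal sketch of a length-checked symbolic ID (std-only plain Rust; FRAME's BoundedVec enforces the same invariant at the type level, and the cap below is an arbitrary assumption):

```rust
const MAX_ID_LEN: usize = 16; // hypothetical cap, like a BoundedVec bound

// A user-defined symbolic ID, rejected at construction if too long --
// the same invariant a BoundedVec encodes in the type itself.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct SymbolId(Vec<u8>);

impl SymbolId {
    fn new(s: &str) -> Result<Self, &'static str> {
        if s.len() > MAX_ID_LEN {
            return Err("id too long");
        }
        Ok(SymbolId(s.as_bytes().to_vec()))
    }
}
```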

    question 
    opened by bmacer 5
  • Forbid sending an equipped NFT

    Forbid sending an equipped NFT

    I suggest forbidding sending of an equipped NFT. I believe sending an equipped NFT can lead to exploits like #185... Also, I think that it is logically more accurate to unequip an equipped NFT first and then send it to some user/token.

    opened by mrshiposha 4
  • Prevent Listing Frozen NFTs

    Prevent Listing Frozen NFTs

    Currently, if an NFT is frozen via a freeze call in the uniques pallet, the NFT can still be listed. A purchase will not go through, but the user can still call list_nft. Fixing this would require an additional read check in the list_nft function.

    opened by HashWarlock 0
  • Improve nesting / recursive calls

    Improve nesting / recursive calls

    Currently there are a few places utilising unbounded recursion (i.e. lookup_root_owner). In order to follow best practices and implement proper benchmarking we need to bound recursive calls (ideally avoiding them in favour of loops).

    The proposed improvement is to integrate the concept of budgets, as seen in the Unique Network example.
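    A bounded, loop-based lookup_root_owner under a budget might look like this (plain-Rust sketch with toy storage maps; all names are hypothetical):

```rust
use std::collections::HashMap;

type Nft = (u32, u32);
type Account = u32;

// Budget: a mutable counter consumed by each parent hop, so traversal
// depth is bounded and can be charged for in benchmarks.
struct Budget(u32);

impl Budget {
    fn consume(&mut self) -> bool {
        if self.0 == 0 {
            return false;
        }
        self.0 -= 1;
        true
    }
}

// Loop-based root lookup: follows virtual-account parents until an
// external account is reached or the budget runs out.
fn lookup_root_owner(
    owner_of: &HashMap<Nft, Account>,
    preimage: &HashMap<Account, Nft>,
    mut nft: Nft,
    budget: &mut Budget,
) -> Option<(Account, Nft)> {
    loop {
        if !budget.consume() {
            return None; // depth limit exceeded
        }
        let owner = *owner_of.get(&nft)?;
        match preimage.get(&owner) {
            None => return Some((owner, nft)), // external account: root found
            Some(parent) => nft = *parent,     // virtual account: go one level up
        }
    }
}
```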

    opened by ilionic 0
  • Resource* events missing CollectionId

    Resource* events missing CollectionId

    None of ResourceAdded | ResourceAccepted | ResourceRemoval | ResourceRemovalAccepted have a collection_id value in the event.

    This makes it impossible to attribute a Resource* event to a given NFT, as both a collection_id and nft_id are needed to correctly identify it.

    opened by boyswan 0
  • Benchmarking RMRK-core

    Benchmarking RMRK-core

    This PR introduces benchmark test for pallet-rmrk-core

    Note: each parachain needs to re-run the benchmark tests on its reference hardware. The major change in most of the files is that the following construct could not be used in the benchmarks! macro:

    where
    	T: pallet_uniques::Config<CollectionId = CollectionId, ItemId = NftId>,
    
    • The pallet-uniques associated type CollectionId has trait bound AtLeast32BitUnsigned and with it From<u32>. To use/cast pallet defined CollectionId=u32 it is enough to call .into()
    • The pallet-uniques associated type ItemId does not support From<u32> and it is therefore used as T::ItemId replacing use of locally defined NftId
    • [x] unittest updated and passing
    • [x] benchmark tests work on runtime and on MockRuntime
    • [x] runtime lib updated with Weights
    • [ ] if pallet supports iteration (as is today) benchmark tests need to be revisited to include iteration steps.
    opened by Maar-io 0
Releases(0.0.1)
Owner
RMRK Team
RMRK.app team
A wallet library for Elements / Liquid written in Rust!

EDK Elements Dev Kit A modern, lightweight, descriptor-based wallet library for Elements / Liquid written in Rust! Inspired by BDK for Elements & Liqu

luca vaccaro 11 Dec 11, 2021
Marinde Anchor-Based, first on mainnet, liquid-staking-program and mSOL->SOL swap pool

marinade-anchor Marinade-finance liquid staking program for the Solana blockchain Audits & Code Review Kudelski Security: https://marinade.finance/Kud

Marinade.Finance 40 Sep 1, 2022
Terra liquid staking derivative

Terra liquid staking derivative. Of the community, by the community, for the community.

null 35 Sep 19, 2022
A solana program designed to mint Metaplex compliant NFTs.

Solana Minter My program used to mint Amoebits & Amoebit Minis. I wrote it from scratch using the hello-world program as an example & base. Features C

vlawmz 33 Sep 9, 2022
A reference NFT Staking Program & Client for Solana NFTs

This is a reference NFT Staking Program & Client for Solana NFTs. This program is compatible with both Candy Machine v1 and Candy Machine v2 NFTs.

Tracy Adams 74 Sep 17, 2022
Galleries of NFTs using Solana and Rust for contracts

About this Package created to simplify the process of parsing NFTs on Solana. It consists of: Package basic things like fetch all NFTs for specific wa

Andrew Scott 1 Jan 28, 2022
EVM compatible chain with NPoS/PoC consensus

Reef Chain Reef chain is written in Rust. A basic familiarity with Rust tooling is required. To learn more about Reef chain, please refer to Documenta

Reef Finance 142 Sep 7, 2022
Trustworthy encrypted command line authenticator app compatible with multiple backups.

cotp - command line totp authenticator I believe that security is of paramount importance, especially in this digital world. I created cotp because I

Reply 56 Sep 20, 2022
Decrypt your LUKS partition using a FIDO2 compatible authenticator

fido2luks This will allow you to unlock your LUKS encrypted disk with an FIDO2 compatible key. Note: This has only been tested under Fedora 31, Ubuntu

null 117 Aug 21, 2022
An Ethereum compatible Substrate blockchain for bounties and governance for the Devcash community.

Substrate Node Template A fresh FRAME-based Substrate node, ready for hacking ?? Getting Started Follow the steps below to get started with the Node T

null 4 Mar 30, 2022
Fiddi is a command line tool that does the boring and complex process of checking and processing/watching transactions on EVM compatible Blockchain.

Fiddi is a command line tool that does the boring and complex process of checking and processing/watching transactions on EVM compatible Blockchain.

Ahmad Abdullahi Adamu 6 Sep 16, 2022
Selendra is a multichains interoperable nominated Proof-of-Stake network for developing and running Substrate-based and EVM compatible blockchain applications.

Selendra An interoperable nominated Proof-of-Stake network for developing and running Substrate-based and EVM compatible blockchain applications. Read

Selendra 15 Sep 19, 2022
A fast and secure multi protocol honeypot.

Medusa A fast and secure multi protocol honeypot that can mimic realistic devices running ssh, telnet, http, https or any other tcp and udp servers. W

Simone Margaritelli 263 Sep 18, 2022
multi-market crank for serum-dex

A performance and cost optimized serum-dex crank that allows combining multiple market cranking instructions into a single transaction, while concurrently generating the crank instructions allowing for increased throughput.

SolFarm 29 May 2, 2022
An encrypted multi client messaging system written in pure Rust

?? Preamble This is a pure Rust multi-client encrypted messaging system, also known as Edode's Secured Messaging System. It is an end-to-end(s) commun

Edode 3 Sep 16, 2022
Multi Party Key Management System (KMS) for Secp256k1 Elliptic curve based digital signatures.

Key Management System (KMS) for curve Secp256k1 Multi Party Key Management System (KMS) for Secp256k1 Elliptic curve based digital signatures. Introdu

[ZenGo X] 56 Sep 9, 2022
Rust implementation of multi-party Schnorr signatures over elliptic curves.

Multi Party Schnorr Signatures This library contains several Rust implementations of multi-signature Schnorr schemes. Generally speaking, these scheme

[ZenGo X] 141 Aug 29, 2022
A standalone Aleo prover build upon snarkOS and snarkVM, with multi-threading optimization

Aleo Light Prover Introduction A standalone Aleo prover build upon snarkOS and snarkVM, with multi-threading optimization. It's called "light" because

Haruka Ma 21 Aug 15, 2022
Two-party and multi-party ECDSA protocols based on class group with Rust

CG-MPC-ECDSA This project aims to implement two-party and multi-party ECDSA protocols based on class group with Rust. It currently includes schemes de

LatticeX Foundation 16 Mar 17, 2022