Eternally liquid. Forward compatible. Nested, conditional, and multi-resourced NFTs.

Overview

RMRK Substrate

Rust Setup

First, complete the basic Rust setup instructions.

Run

Use Rust's native cargo command to build and launch the template node:

cargo run --release -- --dev --tmp

Build

The cargo run command will perform an initial build. Use the following command to build the node without launching it:

cargo build --release

Embedded Docs

Once the project has been built, the following command can be used to explore all parameters and subcommands:

./target/release/rmrk-substrate -h

Run

The provided cargo run command will launch a temporary node and its state will be discarded after you terminate the process. After the project has been built, there are other ways to launch the node.

Single-Node Development Chain

This command will start the single-node development chain with persistent state:

./target/release/rmrk-substrate --dev

Purge the development chain's state:

./target/release/rmrk-substrate purge-chain --dev

Start the development chain with detailed logging:

RUST_BACKTRACE=1 ./target/release/rmrk-substrate -ldebug --dev

Connect with Polkadot-JS Apps Front-end

Once the node template is running locally, you can connect the Polkadot-JS Apps front-end to it to interact with your chain. See the Polkadot-JS Apps documentation for connecting the Apps to your local node template.

Multi-Node Local Testnet

If you want to see the multi-node consensus algorithm in action, refer to our Start a Private Network tutorial.

RMRK Pallets structure

A Substrate project such as this consists of a number of components that are spread across a few directories.

Node

A blockchain node is an application that allows users to participate in a blockchain network. Substrate-based blockchain nodes expose a number of capabilities:

  • Networking: Substrate nodes use the libp2p networking stack to allow the nodes in the network to communicate with one another.
  • Consensus: Blockchains must have a way to come to consensus on the state of the network. Substrate makes it possible to supply custom consensus engines and also ships with several consensus mechanisms that have been built on top of Web3 Foundation research.
  • RPC Server: A remote procedure call (RPC) server is used to interact with Substrate nodes.

There are several files in the node directory - take special note of the following:

  • chain_spec.rs: A chain specification is a source code file that defines a Substrate chain's initial (genesis) state. Chain specifications are useful for development and testing, and critical when architecting the launch of a production chain. Take note of the development_config and testnet_genesis functions, which are used to define the genesis state for the local development chain configuration. These functions identify some well-known accounts and use them to configure the blockchain's initial state.
  • service.rs: This file defines the node implementation. Take note of the libraries that this file imports and the names of the functions it invokes. In particular, there are references to consensus-related topics, such as the longest chain rule, the Aura block authoring mechanism and the GRANDPA finality gadget.

After the node has been built, refer to the embedded documentation to learn more about the capabilities and configuration parameters that it exposes:

./target/release/rmrk-substrate --help

Runtime

In Substrate, the terms "runtime" and "state transition function" are analogous - they refer to the core logic of the blockchain that is responsible for validating blocks and executing the state changes they define. The Substrate project in this repository uses the FRAME framework to construct a blockchain runtime. FRAME allows runtime developers to declare domain-specific logic in modules called "pallets". At the heart of FRAME is a helpful macro language that makes it easy to create pallets and flexibly compose them to create blockchains that can address a variety of needs.

Review the FRAME runtime implementation included in this template and note the following:

  • This file configures several pallets to include in the runtime. Each pallet configuration is defined by a code block that begins with impl $PALLET_NAME::Config for Runtime.
  • The pallets are composed into a single runtime by way of the construct_runtime! macro, which is part of the core FRAME Support library; a brief sketch of both patterns follows this list.
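As a hedged illustration of these two patterns (the pallet names and associated types below are placeholders, not copied verbatim from this repository's runtime), a FRAME runtime's lib.rs typically contains blocks like the following:

// Each pallet is configured for the runtime with an `impl ... Config` block;
// the associated types required differ per pallet.
impl pallet_rmrk_core::Config for Runtime {
    type Event = Event;
    type ProtocolOrigin = EnsureRoot<AccountId>;
    type MaxRecursions = MaxRecursions;
}

// All configured pallets are then composed into a single runtime.
construct_runtime!(
    pub enum Runtime where
        Block = Block,
        NodeBlock = opaque::Block,
        UncheckedExtrinsic = UncheckedExtrinsic
    {
        System: frame_system::{Pallet, Call, Config, Storage, Event<T>},
        Uniques: pallet_uniques::{Pallet, Call, Storage, Event<T>},
        RmrkCore: pallet_rmrk_core::{Pallet, Call, Storage, Event<T>},
        // ...other pallets...
    }
);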

Pallets

The runtime in this project is constructed using many FRAME pallets that ship with the core Substrate repository and a template pallet that is defined in the pallets directory.

A FRAME pallet is comprised of a number of blockchain primitives (a minimal pallet skeleton is sketched after this list):

  • Storage: FRAME defines a rich set of powerful storage abstractions that makes it easy to use Substrate's efficient key-value database to manage the evolving state of a blockchain.
  • Dispatchables: FRAME pallets define special types of functions that can be invoked (dispatched) from outside of the runtime in order to update its state.
  • Events: Substrate uses events and errors to notify users of important changes in the runtime.
  • Errors: When a dispatchable fails, it returns an error.
  • Config: The Config configuration interface is used to define the types and parameters upon which a FRAME pallet depends.
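The following is a minimal, generic pallet skeleton illustrating those primitives. It is a hedged sketch in the style of the Substrate node template, not one of the RMRK pallets:

// A minimal FRAME pallet: Config, Storage, Events, Errors, and a dispatchable.
#[frame_support::pallet]
pub mod pallet {
    use frame_support::pallet_prelude::*;
    use frame_system::pallet_prelude::*;

    #[pallet::pallet]
    pub struct Pallet<T>(_);

    /// Config: the types and parameters the pallet depends on.
    #[pallet::config]
    pub trait Config: frame_system::Config {
        type Event: From<Event<Self>> + IsType<<Self as frame_system::Config>::Event>;
    }

    /// Storage: a single value kept in the chain's key-value database.
    #[pallet::storage]
    pub type Something<T> = StorageValue<_, u32>;

    /// Events: notify users of important runtime changes.
    #[pallet::event]
    #[pallet::generate_deposit(pub(super) fn deposit_event)]
    pub enum Event<T: Config> {
        SomethingStored(u32, T::AccountId),
    }

    /// Errors: returned when a dispatchable fails.
    #[pallet::error]
    pub enum Error<T> {
        NoneValue,
    }

    /// Dispatchables: invoked from outside the runtime to update its state.
    #[pallet::call]
    impl<T: Config> Pallet<T> {
        #[pallet::weight(10_000)]
        pub fn do_something(origin: OriginFor<T>, value: u32) -> DispatchResult {
            let who = ensure_signed(origin)?;
            Something::<T>::put(value);
            Self::deposit_event(Event::SomethingStored(value, who));
            Ok(())
        }
    }
}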

Run in Docker

First, install Docker and Docker Compose.

Then run the following command to start a single node development chain.

./scripts/docker_run.sh

This command will first compile your code and then start a local development network. You can also replace the default command (cargo build --release && ./target/release/rmrk-substrate --dev --ws-external) by appending your own. A few useful ones are as follows.

# Run Substrate node without re-compiling
./scripts/docker_run.sh ./target/release/rmrk-substrate --dev --ws-external

# Purge the local dev chain
./scripts/docker_run.sh ./target/release/rmrk-substrate purge-chain --dev

# Check whether the code is compilable
./scripts/docker_run.sh cargo check
Comments
  • Parachain(s) integration question

    Parachain(s) integration question

    This is a question about the rmrk-core pallet: how heavily is the RMRK frontend code married to calls like mint_nft, send, add_resource, and to an architecture where nesting is a function of the same pallet as the NFT logic? For example, could we replace api.tx.rmrkCore.send with api.tx.uniques.transfer, and api.tx.rmrkCore.addResource with api.tx.nesting.add?

    question 
    opened by gregzaitsev 9
  • Rmrk-Market pallet doesn't compile with `runtime-benchmark` feature

    Rmrk-Market pallet doesn't compile with `runtime-benchmark` feature

    When running:

    cargo test -p pallet-rmrk-market --features runtime-benchmarks
    

    The following error is thrown:

    error[E0432]: unresolved import `pallet_rmrk_core::RmrkBenchmark`
       --> pallets/rmrk-market/src/mock.rs:108:5
        |
    108 | use pallet_rmrk_core::RmrkBenchmark;
        |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `RmrkBenchmark` in the root
    
    error[E0437]: type `Helper` is not a member of trait `pallet_rmrk_core::Config`
       --> pallets/rmrk-market/src/mock.rs:122:2
        |
    122 |     type Helper = RmrkBenchmark;
        |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not a member of trait `pallet_rmrk_core::Config`
    

    Edit: this issue also occurs in the rmrk-equip pallet.

    opened by Szegoo 8
  • Test Coverage

    Test Coverage

    Hi, I am new to open source development and would like to participate. Bruno said I should not mention Jira/Confluence, but he recommended creating an issue here :-)

    With grcov I have created a test coverage html tree --> https://github.com/SiAlDev/rmrk-substrate/tree/main/target/debug/coverage

    I could work on covering all sorts of test scenarios, different data sets, etc. I just need a bit of guidance to get going.

    Best, Silvio

    opened by SiAlDev 8
  • Lazy ownership tree based on pallet-unique's owner

    Lazy ownership tree based on pallet-unique's owner

    An important feature of the RMRK spec is to allow any NFT to own other NFTs. This is called nested NFTs, and it makes NFTs composable. Typical use cases are:

    • A set of NFTs can be combined into a single object that can be sent and sold in an auction as a whole
    • By following some special rules (defined in BASE), some NFTs can be combined in a meaningful way that produces special effects, e.g. glasses can be equipped on a Kanaria bird and rendered as a complete portrait

    Current approach

    Currently, the basic NFT is implemented by pallet-uniques. On top of it, RMRKCore offers the advanced features, including nested NFTs. The basic logic can be described as follows:

    • RMRKCore stores an additional layer of the NFT ownership
      • Let's call it rmrk-owner, in contrast to unique-owner
    • Every RMRK NFT has a rmrk-owner
    • Rmrk-owner can be either AccountId or CollectionIdAndNftTuple (Nft for short)
    • When minted, both rmrk-owner and unique-owner are set to the creator
    • When burned, it recursively burns all the children NFTs
    • When transferred, the destination can be either an AccountId or an Nft
      • When transferring to an AccountId, both rmrk-owner and unique-owner are set to the new owner. Then it recursively sets the unique-owner of all its children to the new owner.
      • When transferring to an Nft, the rmrk-owner is set to the destination NFT and the unique-owner is set to the unique-owner of the destination NFT. Then it recursively sets the unique-owner of all its children to the same account.

    This approach is suboptimal in three aspects:

    • rmrk-owner and unique-owner are theoretically redundant -- the former is a superset of the latter. This makes it harder to keep both relationships in sync.
    • The burn and transfer operations have to be recursive, because we need to proactively update the ownership of the NFTs in uniques (unique-owner). This is an overhead.
    • If we allow users direct access to the underlying pallet-uniques (or access through XCM), they can likely break the sync between uniques and RMRK.

    So here, we'd prefer an approach that minimizes the redundancy of the data and reduces the complexity and overhead of maintaining the data structure.

    Proposal

    The proposal is simple: use the ownership in pallet-uniques to trace the hierarchy of the NFTs.

    The ownership in pallet-uniques is represented by an AccountId. At first glance, this doesn't fit our requirement, which is to allow an NFT to own other NFTs. However, the AccountId doesn't need to be a real wallet; it's just an address that can be backed by anything. In our case, we can create a "virtual account" for each NFT instance by mapping its (CollectionId, NftId) tuple to an AccountId via hashing.

    Once each NFT has a virtual AccountId associated with it, we can safely establish the hierarchy with pallet-uniques' built-in ownership:

    • When minted: just mint a unique NFT
    • When transferred: simply transfer the NFT to the new owner
      • When transferring to an AccountId, just call uniques' transfer
      • When transferring to an NFT, generate its corresponding virtual AccountId, and then do the same transfer as above
    • When burned: check that the NFT has no children, then just burn it

    In this way, each operation is O(1) except the ownership check (discussed below). Even if you transfer an NFT with many children, or put it up for sale, only the ownership of the top one needs to change.

    The virtual account trick

    This is a common trick widely used in Substrate. Virtual accounts are allocated to on-chain multisig wallets, anonymous proxies, and even some pallets. The benefit is that the AccountId unifies the identity representation for all kinds of entities; it fits anywhere an account is accepted to manage ownership.

    The virtual accounts are usually just hard-coded strings or hashes without any private key associated. For example, the AccountId of a multisig wallet can be generated by hash('multisig' ++ encode([owners])). Similarly, we can generate the virtual account id for each NFT like below:

    fn nft_to_account_id<AccountId: Codec>(col: CollectionId, nft: NftId) -> AccountId {
        let preimage = (col, nft).encode();
        let hash = blake2_256(&preimage);
        (b"RmrkNft/", hash)
            .using_encoded(|b| AccountId::decode(&mut TrailingZeroInput::new(b)))
            .expect("Decoding with trailing zero never fails; qed.")
    }
    

    In the above example, "RmrkNft/" takes 8 bytes, and if AccountId is 32 bytes, we have 24 bytes of entropy left for the hash of the NFT ids.

    Or, if we are sure we can get an address space large enough to directly include the ids (e.g. AccountId32 can include CollectionId + NftId, which is just 8 bytes), we can encode the ids into the account id directly. This makes reverse lookup much easier:

    fn nft_to_account_id<AccountId: Codec>(col: CollectionId, nft: NftId) -> AccountId {
        (b"RmrkNft/", col, nft)
            .using_encoded(|b| AccountId::decode(&mut TrailingZeroInput::new(b)))
            .expect("Decoding with trailing zero never fails; qed.")
    }
    
    fn decode_nft_account_id<AccountId: Codec>(account_id: AccountId) -> Option<(CollectionId, NftId)> {
        let (prefix, tuple, suffix) = account_id
            .using_encoded(|mut b| {
                let slice = &mut b;
                let r = <([u8; 8], (CollectionId, NftId))>::decode(slice);
                r.map(|(prefix, tuple)| (prefix, tuple, slice.to_vec()))
            })
            .ok()?;
        // Check prefix and suffix to avoid collision attack
        if &prefix == b"RmrkNft/" && suffix.iter().all(|&x| x == 0) {
            Some(tuple)
        } else {
            None
        }
    }
    

    Ownership check

    A drawback of this approach is that, given an NFT in a tree, you cannot easily tell whether a user is the owner of the tree. This can be solved by a reverse lookup:

    • Save a reverse map: the map from the virtual accounts to the NFTs
      • (This map can actually be omitted if decode_nft_account_id is available)
    • When finding the tree owner, we start from an NFT and check if its owner is in the reverse map
      • If not, this NFT is the root, and its owner is the result
      • If yes, we recursively look up the parent NFT until we reach the root

    In this way, a lookup is O(h) where h is the depth of the NFT in the tree. It can be described in pseudocode:

    type AccountPreimage = StorageMap<_, Twox64Concat, T::AccountId, (CollectionId, NftId)>;

    fn lookup_root_owner(col: CollectionId, nft: NftId) -> (T::AccountId, (CollectionId, NftId)) {
        let parent = pallet_uniques::Pallet::<T>::owner(col, nft);
        match AccountPreimage::<T>::get(parent) {
            None => (parent, (col, nft)),
            Some((col, nft)) => lookup_root_owner(col, nft),
        }
    }
    

    Children

    When burning an NFT, we need to ensure it has no children. This can be done by either:

    • require no child under it,
    • or automatically transfer its children to the caller before burning,
    • or automatically burn all the children before burning.

    It's easy to recursively burn all the children. To track the children of an NFT, a straightforward solution is to keep a record of all its children and update the list dynamically when adding or removing a child:

    type Children<T> = StorageMap<_, Twox64Concat, (CollectionId, NftId), Vec<(CollectionId, NftId)>, ValueQuery>;

    fn add_child(parent: (CollectionId, NftId), child: (CollectionId, NftId)) {
        Children::<T>::mutate(parent, |v| {
            v.push(child)
        });
    }
    fn remove_child(parent: (CollectionId, NftId), child: (CollectionId, NftId)) {
        Children::<T>::mutate(parent, |v| {
            v.retain(|nft| *nft != child);
        });
    }
    fn has_child(parent: (CollectionId, NftId)) -> bool {
        !Children::<T>::get(parent).is_empty()
    }
    

    Caveat: note that this essentially builds an index of the ownership of the NFTs. It only gets updated if the ownership change was initiated by the RMRK pallet. A bad case: if a user transfers an NFT to another NFT via pallet-uniques directly, the RMRK pallet will know nothing about the ownership change. The children index will be out of sync, and therefore, when burning the parent NFT, some children may be missed.

    A solution is to disallow pallet-uniques from transferring an NFT to another NFT, or to add a notification handler to pallet-uniques that calls back whenever a transfer is done.
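    As a purely illustrative sketch of the notification-handler idea (pallet-uniques does not expose such a hook; the trait and names below are hypothetical):

    // Hypothetical post-transfer callback that a uniques-like pallet could invoke,
    // letting RMRK keep its Children index in sync. Not part of pallet-uniques.
    pub trait OnNftTransfer<AccountId, CollectionId, ItemId> {
        fn on_transfer(collection: &CollectionId, item: &ItemId, from: &AccountId, to: &AccountId);
    }

    // No-op implementation for runtimes that don't need the callback.
    impl<AccountId, CollectionId, ItemId> OnNftTransfer<AccountId, CollectionId, ItemId> for () {
        fn on_transfer(_: &CollectionId, _: &ItemId, _: &AccountId, _: &AccountId) {}
    }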

    Pseudocode

    fn mint(origin: OriginFor<T>, owner: T::AccountId, col: CollectionId, nft: NftId) -> DispatchResult {
        ensure_signed(origin.clone())?;
        pallet_uniques::Pallet::<T>::mint(origin, owner, col, nft)?;
        Ok(())
    }
    
    fn burn(origin: OriginFor<T>, owner: T::AccountId, col: CollectionId, nft: NftId) -> DispatchResult {
        let who = ensure_signed(origin)?;
        // Check ownership
        let (root_owner, _) = lookup_root_owner(col, nft);
        ensure!(who == root_owner, Error::BadOrigin);
        // Check no children
        ensure!(!has_child((col, nft)), Error::CannotBurnNftWithChild);
        // Burn
        let parent = pallet_uniques::Pallet::<T>::owner(col, nft);
        pallet_uniques::Pallet::<T>::burn(Origin::signed(parent), col, nft)?;
        // Maintain the children
        if parent != root_owner {
            remove_child(parent, (col, nft));
        }
        Ok(())
    }
    
    fn transfer(origin: OriginFor<T>, col: CollectionId, nft: NftId, dest: AccountIdOrNftTuple) -> DispatchResult {
        let who = ensure_signed(origin)?;
        // Check ownership
        let (root_owner, root_nft) = lookup_root_owner(col, nft);
        ensure!(who == root_owner, Error::BadOrigin);
        // Prepare transfer
        let dest_account = match dest {
            AccountId(id) => id,
            NftTuple(c, n) => {
                // Circle detection
                let (_, dest_root_nft) = lookup_root_owner(c, n);            
                ensure!(root_nft != dest_root_nft, Error::CircleDetected);
                // Convert to virtual account
                nft_to_account_id(c, n)
            },
        };
        let owner = pallet_uniques::Pallet::<T>::owner(col, nft);
        pallet_uniques::Pallet::<T>::transfer(
            Origin::signed(owner),
            col, nft,
            dest_account,
        )?;
        Ok(())
    }
    

    Update notes

    • 2021-12-31
      • Update the pseudocode in the "virtual account" section, also add an alternative encode scheme
      • Update burning and children indexing section according to Bruno's feedback
      • Correct the symbol names of pallet-uniques
      • Fix typos
    opened by h4x3rotab 8
  • Approval mechanism - ink! environment

    Approval mechanism - ink! environment

    RMRK pallet usage from within an ink! environment via chain-extensions requires a Contract AccountId origin. This unfortunately drastically limits the capability of the chain-extension, as the contract can never act on behalf of a user due to the security issues of proxying a User AccountId origin. Original issue discussed

    An ERC* style Approval mechanism could allow contracts to have complete flexibility whilst maintaining user security.

    Some low-level but powerful use cases for approval based actions:

    • Escrow (Contract can send approved NFT(s) on behalf of a user)
    • Multi-collection minters (Multiple contracts can mint from a single collection)
    • Complex Item "destroy" mechanics (Contract can burn NFT(s) on behalf of a user)

    I imagine an approval mechanism has already been considered by the RMRK team, so this post is intended as more of a discussion point as to whether this functionality should, shouldn't (or cannot) exist in the RMRK pallets.

    question 
    opened by boyswan 7
  • allow minting directly to nft

    allow minting directly to nft

    This is a basic implementation, but there are some missing pieces:

    • [x] need to add tests:
    • [x] mint to nft works,
    • [x] mint to nft fails if nft doesn't exist,
    • [x] mint to non-owned nft results in pending state
    • [x] other tests?
    • [x] Currently I think minting to a non-owned NFT with resources will leave each resource in a pending state, which is likely undesirable; we want the resources accepted when added during mint. Two possible solutions: (1) implement an auto-accept right after the resource add inside of the mint [con: a bunch of events, maybe more weight], or (2) add a during_mint parameter to resource_add, which would override the requirement for the sender to equal the owner. I'd vote for option 2.
    opened by bmacer 7
  • NFT Lock Mechanism

    NFT Lock Mechanism

    Targets

    • [x] Temporarily add Uniques pallet
    • [x] Add a dependency injection Trait to Uniques pallet for RMRK Core pallet to implement
    • [x] Implement Lock logic in RMRK Core
    • [x] Add tests to ensure Locks operate correctly
    • [x] Rebase changes with main branch
    • [x] Submit PR for Uniques pallet in Substrate repo to add the dependency injection Trait
    • [x] After PR accepted remove Uniques pallet from RMRK repo
    • [x] Make PR suggested changes
    • [x] Update Cargo to polkadot-v0.9.22 when it is released with Locker Trait

    Procedure

    • Temporarily add a local copy of the Uniques pallet to the repo
    • Implement the dependency injection Trait for the Lock and implement a function to default to false
    • Implement the new Lock Trait in RMRK Core and add logic to configure the Lock status as a bool
    • Add error checks and tests to ensure a locked NFT has the following limitations:
      • no adding children
      • no changing priority
      • no adding resources
      • no equip/unequip
      • no mutating attributes or properties
    • Pseudocode example: https://gist.github.com/h4x3rotab/152d05c9b5a18462e8dfe2b299b58db8

    RMRK Spec

    opened by HashWarlock 7
  • Additional parameters for burn_nft

    Additional parameters for burn_nft

    It seems that the current weight of burn_nft is too low, because it is a recursive transaction.

    A benchmark should be written to generate the appropriate weight for the transaction. Although it can be written later, there is an architectural question -- what signature should burn_nft have?

    I believe it is essential to have additional parameters like max_recursions to actually be able to write a correct benchmark. For burn_nft, we would even need 2 additional parameters (such as max_burns and max_breadth), because the weight of actually burning an NFT differs from the weight of traversing an NFT bundle during the burn transaction.

    Also, it would be good to use only one argument of some struct type (see the sketch after this list). Here is why:

    • You can't mess up the order of the limit arguments (like max_burns and max_breadth)
    • You can easily add something to it
    • You can make optional arguments with default values
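    A hedged sketch of such a struct (the name and fields are illustrative, not an actual pallet type; a real extrinsic argument would additionally derive Encode/Decode/TypeInfo):

    // Hypothetical limits struct bundling both parameters for burn_nft.
    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    pub struct BurnLimits {
        /// Maximum number of NFTs that may actually be burned.
        pub max_burns: u32,
        /// Maximum breadth traversed at each level of the NFT bundle.
        pub max_breadth: u32,
    }

    // Default values make the limit arguments effectively optional for callers.
    impl Default for BurnLimits {
        fn default() -> Self {
            Self { max_burns: 10, max_breadth: 10 }
        }
    }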
    opened by mrshiposha 6
  • Add Resource During Mint Process

    Add Resource During Mint Process

    Currently we have to add the Resource after minting an NFT in Phala World, which requires the owner to call accept_resource to get a resource for the NFT. I would suggest adding another parameter so that a Resource can be added during the minting process, avoiding the separate Resource addition and the new owner having to accept it.

    enhancement 
    opened by HashWarlock 6
  • Bug/93 update vec to bounded vec

    Bug/93 update vec to bounded vec

    addresses #93. We may be able to alias some of the generics to make it a bit more readable, but this should pass tests for all of equip, market and core, and should successfully remove #[pallet::without_storage_info]

    opened by bmacer 6
  • Question: Adding rmrk-core as an external dependency

    Question: Adding rmrk-core as an external dependency

    I'm starting out with Rust & Substrate development and I'm interested in adding the rmrk-core pallet as an external dependency to a fresh substrate-node-template project, but I'm getting a bunch of errors. This is what I've tried:

    • Adding the pallet on runtime/Cargo.toml:
    
    
    [dependencies.pallet-rmrk-core]
    default_features = false
    git = 'https://github.com/rmrk-team/rmrk-substrate.git'
    branch = 'main'
    version = "0.0.1"
    
    ...
    
    [features]
    default = ["std"]
    std = [
    	#... Template dependencies
    	"pallet-rmrk-core/std",
    	
    ]
    
    • Configure the pallets on runtime/src/lib.rs
    
    parameter_types! {
    	pub const MaxRecursions: u32 = 10;
    }
    
    impl pallet_rmrk_core::Config for Runtime {
    	type Event = Event;
    	type ProtocolOrigin = EnsureRoot<AccountId>;
    	type MaxRecursions = MaxRecursions;
    }
    
    

    And adding them on the construct_runtime! macro:

    construct_runtime!(
    	pub enum Runtime where
    		Block = Block,
    		NodeBlock = opaque::Block,
    		UncheckedExtrinsic = UncheckedExtrinsic
    	{
    		/*--- snip ---*/
    		RmrkCore: pallet_rmrk_core::{Pallet, Call, Event<T>, Storage},
    	}
    );
    

    Executing cargo check in the terminal generates several errors (164 at the time), but here are some that seem interesting:

    error: duplicate lang item in crate `sp_io` (which `frame_support` depends on): `panic_impl`.
        |
        = note: the lang item is first defined in crate `sp_io` (which `frame_support` depends on)
        = note: first definition in `sp_io` loaded from ./substrate-node-template/target/debug/wbuild/node-template-runtime/target/wasm32-unknown-unknown/release/deps/libsp_io-d578f12f545fdf6e.rmeta
        = note: second definition in `sp_io` loaded from ./substrate-node-template/target/debug/wbuild/node-template-runtime/target/wasm32-unknown-unknown/release/deps/libsp_io-2c478c57ead6a958.rmeta
    
    error[E0277]: the trait bound `Runtime: frame_system::pallet::Config` is not satisfied
         --> ./substrate-node-template/runtime/src/lib.rs:287:6
          |
      287 | impl pallet_uniques::Config for Runtime {
          |      ^^^^^^^^^^^^^^^^^^^^^^ the trait `frame_system::pallet::Config` is not implemented for `Runtime`
          |
      note: required by a bound in `pallet_uniques::Config`
         --> ./.cargo/git/checkouts/substrate-7e08433d4c370a21/1ca6b68/frame/uniques/src/lib.rs:69:37
          |
      69  |     pub trait Config<I: 'static = ()>: frame_system::Config {
          |                                        ^^^^^^^^^^^^^^^^^^^^ required by this bound in `pallet_uniques::Config`
    
      error[E0277]: the trait bound `Runtime: frame_system::pallet::Config` is not satisfied
         --> ./substrate-node-template/runtime/src/lib.rs:307:6
          |
      307 | impl pallet_rmrk_core::Config for Runtime {
          |      ^^^^^^^^^^^^^^^^^^^^^^^^ the trait `frame_system::pallet::Config` is not implemented for `Runtime`
          |
      note: required by a bound in `pallet_rmrk_core::Config`
         --> ./.cargo/git/checkouts/rmrk-substrate-4fe3068c99fc9e3a/b47d311/pallets/rmrk-core/src/lib.rs:53:20
          |
      53  |     pub trait Config: frame_system::Config + pallet_uniques::Config {
          |                       ^^^^^^^^^^^^^^^^^^^^ required by this bound in `pallet_rmrk_core::Config`
    
    

    Am I doing something wrong when importing the pallet? I'll be happy to provide more information if required. Thanks in advance.

    question 
    opened by amatsonkali 6
  • Add TransferHooks, Budget fix and upgrade to Polkadot v0.9.34

    Add TransferHooks, Budget fix and upgrade to Polkadot v0.9.34

    Description

    TransferHooks implementation

    For implementations downstream of the RMRK pallet, there needs to be a way to add custom logic when multiple pallets use the RMRK NFT pallet but want to keep their business logic mutually exclusive. This implementation allows a trait called TransferHooks to be implemented, with two functions, pre_check and post_transfer, that default to true when not implemented.
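    A minimal sketch of what such a trait could look like (an assumption about its shape, not the exact definition in this PR):

    // Hypothetical TransferHooks-style trait: pre_check can veto a transfer,
    // post_transfer runs custom logic afterwards; both default to true.
    pub trait TransferHooks<AccountId, CollectionId, NftId> {
        fn pre_check(sender: &AccountId, recipient: &AccountId, collection_id: &CollectionId, nft_id: &NftId) -> bool;
        fn post_transfer(sender: &AccountId, recipient: &AccountId, collection_id: &CollectionId, nft_id: &NftId) -> bool;
    }

    // Default no-op implementation for runtimes without custom transfer logic.
    impl<AccountId, CollectionId, NftId> TransferHooks<AccountId, CollectionId, NftId> for () {
        fn pre_check(_: &AccountId, _: &AccountId, _: &CollectionId, _: &NftId) -> bool { true }
        fn post_transfer(_: &AccountId, _: &AccountId, _: &CollectionId, _: &NftId) -> bool { true }
    }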

    Budget trait fix

    There are 2 problems with the current Budget trait:

    NFTs can currently be minted to or sent to a parent NFT up to a threshold of NestingBudget::get() + 1. This allows NFT queries like lookup_root_owner to return errors like TooManyRecursions. Having this error is okay, but NFTs should be prevented from being minted to or sent to a parent NFT once the Budget threshold is met.

    Currently, functions like burn_nft perform an arithmetic subtraction at the end of the function; instead, consumption should be tracked as a field in the Value type, with functions on Budget to get the budget that has been consumed and the budget that is left.
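    A hedged sketch of budget accounting along those lines (names and types are assumptions, not the crate's actual definitions):

    use core::cell::Cell;

    // Hypothetical Budget trait exposing consumed/left alongside consumption.
    pub trait Budget {
        /// Try to consume one unit; returns false once the budget is exhausted.
        fn consume(&self) -> bool;
        /// Units consumed so far.
        fn consumed(&self) -> u32;
        /// Units still available.
        fn left(&self) -> u32;
    }

    // A simple counting budget backed by interior mutability.
    pub struct Value {
        limit: u32,
        consumed: Cell<u32>,
    }

    impl Value {
        pub fn new(limit: u32) -> Self {
            Self { limit, consumed: Cell::new(0) }
        }
    }

    impl Budget for Value {
        fn consume(&self) -> bool {
            if self.consumed.get() >= self.limit {
                return false;
            }
            self.consumed.set(self.consumed.get() + 1);
            true
        }
        fn consumed(&self) -> u32 {
            self.consumed.get()
        }
        fn left(&self) -> u32 {
            // Saturating arithmetic avoids any chance of underflow.
            self.limit.saturating_sub(self.consumed.get())
        }
    }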

    Targets

    • [x] Upgrade to polkadot-v0.9.34
    • [x] implement TransferHooks trait with pre_check and post_transfer functions
    • [x] Update integration tests
    • [x] Fix minting/sending NFTs that exceed the threshold of NestingBudget::get() value.
    • [x] Add fields named budget_consumed and budget_left to track consumption & throw errors if an overflow is detected.
    • [x] Add functions in Budget trait to get the 2 new fields.
    • [x] Fix benchmarking errors from Traits being renamed
    opened by HashWarlock 0
  • Have a way to bulk remove properties

    Have a way to bulk remove properties

    In almost all cases, the properties should be removed when the NFT is burnt. Right now we only have this, which is less efficient than draining an iterator:

    https://github.com/rmrk-team/rmrk-substrate/blob/f2191fda8c6e7e79c154fa46791e4c1f37e8d86e/pallets/rmrk-core/src/functions.rs#L95-L106

    opened by h4x3rotab 0
  • Burn NFTs causes panic

    Burn NFTs causes panic

    Description

    If the NestingBudget is set too low & there is a call to burn_nft, the function will panic! instead of throwing an Error.


    https://github.com/rmrk-team/rmrk-substrate/blob/c65927c8440c53f17e4bc55a5f28289d379f41aa/pallets/rmrk-core/src/functions.rs#L591-L594

    Is there a reason we subtract from the Budget after the storage mutation and the event is emitted? Also, we should use a safer operation, like saturating_sub, to prevent the overflow.
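    For illustration (a generic example, not the pallet's code), saturating_sub clamps at zero instead of underflowing:

    fn remaining(budget: u32, consumed: u32) -> u32 {
        // saturating_sub never underflows: 3u32.saturating_sub(5) == 0
        budget.saturating_sub(consumed)
    }

    fn main() {
        assert_eq!(remaining(3, 5), 0); // a plain `3 - 5` would panic in a debug build
    }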

    opened by HashWarlock 3
  • A need to retrieve all NFTs under a certain address through one query

    A need to retrieve all NFTs under a certain address through one query

    If we need to get all NFTs owned by one user, we have to run several calls:

    1. Call api.query.rmrkCore.collections.entries to get all existing collections.
    2. Call api.query.uniques.account.entries(accountAddress, collectionId) to get all NFT IDs owned by one user inside a specified collection.
    3. Call api.query.rmrkCore.nfts.entries(collectionId, nftId) to fetch core metadata for one NFT.
    4. Call api.query.rmrkCore.properties.entries(collectionId, nftId) to fetch custom properties for one NFT.

    So when we need to get all NFTs belonging to one address and retrieve their metadata, we have to wait a long time for processing. If we really need to retrieve all of that information, we need some pre-built index service (like SubSquid or SubQuery). Unfortunately, PW's API doesn't provide such a capability now.

    opened by doyleguo 1
  • Allow to configure who can mint nft

    Allow to configure who can mint nft

    Summary

    In many networks there will be a need to control exactly who can mint NFTs. Currently, anyone can do so. It might be more convenient to allow the runtime to configure this.

    Proposed changes

    Current

    Inside calls like mint_nft_directly_to_nft, create_collection and so on, we have

    ensure_signed(origin)?;
    

    Proposed

    Change to custom origin verifier

    T::ProducerOrigin::ensure_origin(origin)?;
    

    where T::ProducerOrigin will be

    Config: frame_system::Config {
     // -- snip
     type ProducerOrigin: EnsureOrigin<Self::Origin, Success = Self::AccountId>;
    }
    

    If so, I am willing to work on this.

    opened by sudipghimire533 2
Releases(0.2.0)
  • 0.2.0(Nov 23, 2022)

    • Benchmarking RMRK-core #219
    • Safety pass: avoid usage of unwrap #232
    • Equippables addition/removal feature #234
    • Improve nesting / bound recursive calls via budgeting #235
    • Add replace resource #236
    • Update equipped field in NftInfo #238
    • Benchmark burn_nft call with deep nesting #240
    • fix PhantomType TypeBuilder build error #249
  • 0.1.1(Oct 18, 2022)

    • Fix resource removal. PR https://github.com/rmrk-team/rmrk-substrate/pull/216
    • Fix Send/Accept NFT. PR https://github.com/rmrk-team/rmrk-substrate/pull/230
    • Fix update Children storage when sending NFT directly to NFT. PR https://github.com/rmrk-team/rmrk-substrate/pull/228
    • Prevent frozen NFTs from being listed. PR https://github.com/rmrk-team/rmrk-substrate/pull/226
    • Fix allow minting after burn. PR https://github.com/rmrk-team/rmrk-substrate/pull/215
  • 0.1.0(Sep 29, 2022)

    • remove unnecessary check in list ( fixes https://github.com/rmrk-team/rmrk-substrate/issues/186 ). PR https://github.com/rmrk-team/rmrk-substrate/pull/222
    • Upgrade to Polkadot-v0.9.29. PR https://github.com/rmrk-team/rmrk-substrate/pull/225
    • Removing compile warnings. PR https://github.com/rmrk-team/rmrk-substrate/pull/218
  • 0.0.1(Sep 21, 2022)

Owner
RMRK Team (RMRK.app team)