NFT Market Reference Implementation
A PoC backbone for NFT Marketplaces on NEAR Protocol.
Changelog
Progress:
- basic purchase of NFT with FT
- demo pay out royalties (FTs and NEAR)
- test and determine best-practice standards for markets to buy/sell NFTs (standard being finalized) with FTs (already a standard)
- demo some basic auction types and secondary markets
- frontend example
- first pass / internal audit
- connect with bridged tokens e.g. buy and sell with wETH/nDAI (or whatever we call these)
Known Issues / Implementation Details for Markets
- approve NFT on marketplace A and B
- it sells on B
- still listed on A
- user Alice goes to purchase on marketplace A but this will fail
- the token has been transferred already and marketplace A has an incorrect approval ID for this NFT
There are 4 potential solutions:
1. Handle it in the market contract - when the purchase fails because nft_transfer fails, the marketplace could make a promise call that checks whether the nft_token owner_id still matches the sale owner_id and, if not, removes the sale. The purchase will still fail for 1 user.
2. Handle it with a backend - run a cron job that checks sales on a regular interval. This potentially avoids failing for any user.
3. Remove it from the frontend (using the frontend or backend) - for every sale, check that sale.owner_id == nft.owner_id and hide invalid sales in the frontend UI. This is heavy processing for the client side, so it still needs a backend.
4. Let it fail client side, then alert the backend to remove the sale. No cron. Still fails for 1 user.
Matt's opinion: Options 2/3 give the best UX and also keep your sale listings the most accurate and up to date. If you're implementing a marketplace, you are most likely running a backend somewhere with the marketplace owner account. If you go with Option 3 you can simply maintain a list of "invalid sales" and filter these out before you send the sale listings to the client. If you go with Option 2, modify the marketplace remove_sale to allow your marketplace owner account to remove any sale.
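To illustrate Option 3, here is a minimal sketch of a backend job that filters stale sales before sending listings to the client. The marketplace view method get_sales_by_nft_contract_id and the sale shape ({ owner_id, token_id }) are assumptions here and should be matched to your market contract; nft_token is the standard NFT view method.

```js
// Sketch of Option 3: filter out stale sales before sending listings to the client.
// Assumes the marketplace exposes a view like get_sales_by_nft_contract_id and that
// each sale has { owner_id, token_id }. Adjust to your contract's actual interface.
const { connect, keyStores } = require('near-api-js');

const NFT_CONTRACT_ID = 'nft.example.testnet';      // example ids
const MARKET_CONTRACT_ID = 'market.example.testnet';

async function getValidSales() {
  const near = await connect({
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    keyStore: new keyStores.InMemoryKeyStore(),
  });
  // any existing account works for view calls
  const account = await near.account(MARKET_CONTRACT_ID);

  const sales = await account.viewFunction(MARKET_CONTRACT_ID, 'get_sales_by_nft_contract_id', {
    nft_contract_id: NFT_CONTRACT_ID,
    from_index: '0',
    limit: 50,
  });

  const validSales = [];
  for (const sale of sales) {
    // nft_token is the standard NFT view; if the token changed hands,
    // sale.owner_id no longer matches and the sale is stale
    const token = await account.viewFunction(NFT_CONTRACT_ID, 'nft_token', {
      token_id: sale.token_id,
    });
    if (token && token.owner_id === sale.owner_id) {
      validSales.push(sale);
    }
  }
  return validSales;
}
```

A cron job (Option 2) can use the same check and call remove_sale as the marketplace owner account instead of filtering.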
Notes:
High level diagram of NFT sale on Market using Fungible Token:
For sales in NEAR, remove the FT steps (nft_transfer_payout and resolve_purchase stay the same).
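For orientation, the buyer side of an FT purchase starts with the standard ft_transfer_call on the FT contract, which hands the tokens and a msg to the market (ft_on_transfer); the market then calls nft_transfer_payout on the NFT contract and settles in resolve_purchase. The msg schema below ({ nft_contract_id, token_id }) is an illustrative assumption, not the contract's exact format.

```js
// Sketch: buyer pays with a fungible token by calling ft_transfer_call on the FT contract.
// The market receives the tokens in ft_on_transfer, calls nft_transfer_payout on the NFT
// contract, and resolves payouts in resolve_purchase.
// `buyerAccount` is a signed-in near-api-js Account; the msg payload is an assumption.
async function buyWithFT(buyerAccount, { ftContractId, marketContractId, nftContractId, tokenId, price }) {
  return buyerAccount.functionCall({
    contractId: ftContractId,
    methodName: 'ft_transfer_call',   // standard FT method
    args: {
      receiver_id: marketContractId,
      amount: price,                  // FT amount as a string, in the token's smallest unit
      msg: JSON.stringify({ nft_contract_id: nftContractId, token_id: tokenId }),
    },
    gas: '200000000000000',           // cross-contract calls need extra gas
    attachedDeposit: '1',             // ft_transfer_call requires exactly 1 yoctoNEAR
  });
}
```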
Differences from nft-simple
NFT standard reference implementation:
- anyone can mint an NFT
- Optional token_type
- capped supply by token_type
- lock transfers by token_type (see the sketch after this list)
- enumerable.rs
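As a rough sketch of how the optional token_type might surface in a mint call; the argument names below are assumptions and should be checked against this contract's actual nft_mint signature.

```js
// Sketch only: a mint call with the optional token_type. Argument names are
// assumptions; verify against this contract's nft_mint signature.
const { utils } = require('near-api-js');

async function mintExample(account) {  // `account` is a signed-in near-api-js Account
  return account.functionCall({
    contractId: 'nft.example.testnet', // example id
    methodName: 'nft_mint',
    args: {
      token_id: 'token-1',
      metadata: { title: 'My NFT', media: 'https://example.com/image.png' },
      token_type: 'limited-edition',   // optional; supply can be capped and transfers locked per type
    },
    gas: '100000000000000',
    attachedDeposit: utils.format.parseNearAmount('0.1'), // covers storage; amount is illustrative
  });
}
```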
Working
Frontend App Demo: /test/app.test.js/
- install, deploy, test
yarn && yarn test:deploy
- run app -
yarn start
App Tests: /test/app.test.js/
- install, deploy, test
yarn && yarn test:deploy
- if you update contracts -
yarn test:deploy
- if you update tests only -
yarn test
NFT Specific Notes
Associated Video Demos (most recent at top)
Older Walkthrough Videos:
Some additional ideas around user onboarding:
Detailed Installation / Quickstart
If you don't have Rust
Install Rust https://rustup.rs/
If you have never used near-cli
- Install near-cli:
npm i -g near-cli
- Create testnet account: Wallet
- Login:
near login
Installing and Running Tests for this Example
- Install everything:
yarn && (cd server && yarn)
- Deploy the contract and run the app tests:
yarn test:deploy
- (WIP) Start server and run server tests:
cd server && yarn start
then in another terminal, from the root: yarn test:server
Notes
- If you ONLY change the JS tests, use yarn test.
- If you change the contract, run yarn test:deploy again.
- If you run out of funds in the dev account, run yarn test:deploy again.
- If you change the dev account (yarn test:deploy), the server should restart automatically, but you may need to restart the app and sign out/in again with NEAR Wallet.
Moar Context
There are 3 main areas to explore:
- frontend app - shows how to create guest accounts that are added to the app contract via the nodejs server. Guests can mint NFTs, put them up for sale and earn NEAR tokens. When the guest has NEAR they can upgrade their account to a full account.
- app.test.js (demos frontend only tests)
Owner Account, Token Account, etc...
The tests are set up to auto-generate the dev account each time you run test:deploy, e.g. you will get a new NFT contract address each time you run a test.
This is just for testing. You can obviously deploy a token to a fixed address on testnet / mainnet; it's an easy config update.
Guests Account (key and tx gas sponsorship)
When you run the app / server tests, a contract is deployed and a special account, guests.OWNER_ACCOUNT_ID, is created to manage the sponsored users (the ones you will pay gas fees for while onboarding). This special "guests" account is different from the test guest account bob.TOKEN_ID.OWNER_ACCOUNT_ID; it is a separate account from the owner and token accounts that manages the guests' keys.
Guest Accounts
The guest users can call claim_drop and ft_transfer_guest, and receive tokens from other users; e.g. in the server tests the owner transfers tokens to the guest account via an API call and using client side code.
Then, following the server tests, the guest transfers tokens to alice (who is a real NEAR account e.g. she pays her own gas).
Finally, the guest upgrades themselves to a real NEAR account, something demoed in the video.
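A rough sketch of what those guest calls look like from the client, written in the same contract-instance style used elsewhere in this README; the argument names are assumptions, and the key/gas sponsorship is handled by the app, so check /test/app.test.js for the real signing path.

```js
// Sketch only: the shape of the guest calls. Argument names are assumptions; in this
// repo the guest's key and gas are sponsored by the app (guests.OWNER_ACCOUNT_ID).
await contract.claim_drop({}, GAS);
await contract.ft_transfer_guest({
  receiver_id: 'alice.testnet', // a real NEAR account, so she pays her own gas
  amount: '1000000',            // token amount as a string, in the token's smallest unit
}, GAS);
```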
It's a lot to digest, but if you focus on /test/app.test.js you will start to see the patterns.
Background
One of the issues with onboarding new users to crypto is that they need to have crypto to do anything, e.g. mint an NFT. A creator, artist or community might want to drop a bunch of free minting options to their fans for them to mint user generated content, but prior to the drop the audience has (1) no crypto to pay for fees, (2) no wallet, and (3) no concept of crypto or blockchain.
So let's solve these issues by allowing users to generate content the traditional Web2 way!
We do a demo of creating a "guest" named account for an app where the gas fees are sponsored by a special app account called "guests.APP_NAME.near". The guest account doesn't exist (it's sometimes called a virtual or contract account) until the user creates and sells an NFT that generates some NEAR tokens, and then they can upgrade to a real account. Until then their name is reserved, because only the app is able to create "USERNAME.APP_NAME.near".
This has many advantages for user onboarding, where users can use the app immediately and later can be upgraded to a full account. The users also don't have to move any assets - namely the fungible tokens they earned as a guest user.
Installation
Beyond having npm and node (latest versions), you should have Rust installed. I recommend nightly because living on the edge is fun.
Don't forget to install the wasm32 target:
rustup target add wasm32-unknown-unknown
I also recommend installing near-cli globally:
npm i -g near-cli
Everything else can be installed via: yarn
cd server && yarn
NEAR Config
There is only one config.js file found in src/config.js
, this is also used for running tests.
Using src/config.js
you can set up your different environments. Use REACT_APP_ENV
to switch environments e.g. in package.json
script deploy
.
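As a rough sketch (the real file in this repo may differ), src/config.js returns per-environment settings from a single function, switched by REACT_APP_ENV, and is shared by the app and the tests.

```js
// Sketch of src/config.js; the real file in this repo may differ. The point is that a
// single function returns per-environment settings plus app constants like GAS and contractName.
const contractName = process.env.CONTRACT_NAME || 'dev-1234-example'; // e.g. the dev account from /neardev

module.exports = function getConfig() {
  const env = process.env.REACT_APP_ENV || 'development';
  const common = {
    GAS: '200000000000000',
    contractName,
  };
  if (env === 'mainnet' || env === 'production') {
    return {
      ...common,
      networkId: 'mainnet',
      nodeUrl: 'https://rpc.mainnet.near.org',
      walletUrl: 'https://wallet.near.org',
    };
  }
  return {
    ...common,
    networkId: 'testnet',
    nodeUrl: 'https://rpc.testnet.near.org',
    walletUrl: 'https://wallet.testnet.near.org',
  };
};
```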
Running Tests
You can run unit tests in the Rust contracts themselves, but it may be more useful to run the JS tests against testnet itself.
Note: to run the app and server tests make sure you install and start the server.
- cd server
- yarn && yarn start
Commands:
- test - will simply run app tests against the contract already deployed. You can mess around with app.test.js and try different frontend stuff.
- test:deploy - will deploy a new dev account (/neardev) and deploy a new contract to this account, then run test.
- test:server - will test the server; make sure you start it (see "Note" above).
- test:unit - runs the Rust unit tests.
If you've changed your contract or your dev account has run out of funds, use test:deploy; if you're updating your JS tests only, use test.
Test Utils
There are helpers in test/test-utils.js that take care of:
- creating a near connection and establishing a keystore for the dev account
- creating test accounts each time a test is run
- establishing a contract instance so you can call methods
You can change the default funding amount for test accounts in src/config.js
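For orientation, a test might lean on these helpers roughly as follows; the helper name (initNear) and the view method shown are illustrative, not the exact exports of test/test-utils.js.

```js
// Illustrative only: helper names are assumptions, check test/test-utils.js for the
// real exports. Shows the general shape of a jest test run against testnet.
const testUtils = require('./test-utils');

describe('nft market', () => {
  let alice;    // a freshly created, funded test account
  let contract; // a near-api-js Contract instance pointed at the dev-deployed contract

  beforeAll(async () => {
    ({ alice, contract } = await testUtils.initNear());
  });

  test('can view contract metadata', async () => {
    expect(alice.accountId).toBeDefined();
    const metadata = await contract.nft_metadata(); // standard NFT metadata view
    expect(metadata).toBeDefined();
  });
});
```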
Using the NEAR Config in your app
In src/state/near.js you will see that src/config.js is loaded as a function. This is to satisfy the jest/node test runner.
You can destructure any properties of the config in any module you import it into, like this:
// example file app.js
import getConfig from '../config';
export const {
GAS,
networkId, nodeUrl, walletUrl, nameSuffix,
contractName,
} = getConfig();
Notice the export const in the destructuring?
Now you can import these like so:
//example file Component.js
import { GAS } from '../app.js'
...
await contract.withdraw({ amount: parseNearAmount('1') }, GAS)
...
React 17, Parcel with useContext and useReducer
- Bundled with Parcel 2.0 (@next) && eslint
- Minimal all-in-one state management with async/await support
Getting Started: State Store & useContext
The following steps describe how to use src/utils/state to create and use your own store and StateProvider.
- Create a file, e.g. /state/app.js, and add the following code:
import { State } from '../utils/state';
// example
const initialState = {
app: {
mounted: false
}
};
export const { store, Provider } = State(initialState);
- Now in your index.js, wrap your App component with the StateProvider:
import { Provider } from './state/app';
ReactDOM.render(
<Provider>
<App />
</Provider>,
document.getElementById('root')
);
- Finally, in App.js you can useContext(store):
const { state, dispatch, update } = useContext(store);
Usage in Components
Print out state values
<p>Hello {state.foo && state.foo.bar.hello}</p>
Update state directly in component functions
const handleClick = () => {
update('clicked', !state.clicked);
};
Dispatch a state update function (action listener)
const onMount = () => {
dispatch(onAppMount('world'));
};
useEffect(onMount, []);
Dispatched Functions with context (update, getState, dispatch)
When a function is called using dispatch, arguments are passed to the outer function, and the inner function it returns should be async and accept the following JSON args: { update, getState, dispatch }
Example of a call:
dispatch(onAppMount('world'));
All dispatched methods and update calls are async and can be awaited. It also doesn't matter what file/module the functions are in, since the json args provide all the context needed for updates to state.
For example:
import { helloWorld } from './hello';
export const onAppMount = (message) => async ({ update, getState, dispatch }) => {
update('app', { mounted: true });
update('clicked', false);
update('data', { mounted: true });
await update('', { data: { mounted: false } });
console.log('getState', getState());
update('foo.bar', { hello: true });
update('foo.bar', { hello: false, goodbye: true });
update('foo', { bar: { hello: true, goodbye: false } });
update('foo.bar.goodbye', true);
await new Promise((resolve) => setTimeout(() => {
console.log('getState', getState());
resolve();
}, 2000));
dispatch(helloWorld(message));
};
Prefixing store and Provider
The default names the State factory method returns are store and Provider. However, if you want multiple stores and provider contexts, you can pass an additional prefix argument to disambiguate:
export const { appStore, AppProvider } = State(initialState, 'app');
Performance and memo
The updating of a single store, even several levels down, is quite quick. If you're worried about components re-rendering, use memo:
import React, { memo } from 'react';
const HelloMessage = memo(({ message }) => {
console.log('rendered message');
return <p>Hello { message }</p>;
});
export default HelloMessage;
Higher up the component hierarchy you might have:
const App = () => {
  const { state, dispatch, update } = useContext(appStore);
  ...
  const handleClick = () => {
    update('clicked', !state.clicked);
  };
  return (
    <div className="root">
      <HelloMessage message={state.foo && state.foo.bar.hello} />
      <p>clicked: {JSON.stringify(state.clicked)}</p>
      <button onClick={handleClick}>Click Me</button>
    </div>
  );
};
When the button is clicked, the HelloMessage component will not re-render; its value has been memoized (cached). Using this method you can easily prevent performance-intensive state updates in components further down the tree until they are necessary.
Reference: