A Rust binding for Node.js to generate MD5 hash values

Overview

Hasher

A Rust binding for creating a Node module that generates MD5 hash values.

This project was bootstrapped by create-neon.

Installing hasher

Installing hasher requires a supported version of Node and Rust.

You can install the project with npm. In the project directory, run:

$ npm install

This fully installs the project, including installing any dependencies and running the build.

Building hasher

If you have already installed the project and only want to run the build, run:

$ npm run build

This command uses the cargo-cp-artifact utility to run the Rust build and copy the built library into ./index.node.

Exploring hasher

After building hasher, you can explore its exports at the Node REPL:

$ npm install
$ node
> require('.').hash("hello world")
"5eb63bbbe01eeed093cb22bb8f5acdc3"
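The value returned is a plain MD5 hex digest. As a quick cross-check outside Node (using Python's standard hashlib, which is not part of this project):

```python
import hashlib

# MD5 of "hello world" as a hex string, matching the REPL output above
digest = hashlib.md5(b"hello world").hexdigest()
print(digest)  # 5eb63bbbe01eeed093cb22bb8f5acdc3
```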

Available Scripts

In the project directory, you can run:

npm install

Installs the project, including running npm run build.

npm build

Builds the Node addon (index.node) from source.

Additional cargo build arguments may be passed to npm build and npm build-* commands. For example, to enable a cargo feature:

npm run build -- --feature=beetle

npm build-debug

Alias for npm build.

npm build-release

Same as npm build but, builds the module with the release profile. Release builds will compile slower, but run faster.

npm test

Runs the unit tests by calling cargo test. You can learn more about adding tests to your Rust code from the Rust book.

Project Layout

The directory structure of this project is:

hasher/
├── Cargo.toml
├── README.md
├── index.node
├── package.json
├── src/
│   └── lib.rs
└── target/

Cargo.toml

The Cargo manifest file, which informs the cargo command.

README.md

This file.

index.node

The Node addon—i.e., a binary Node module—generated by building the project. This is the main module for this package, as dictated by the "main" key in package.json.

Under the hood, a Node addon is a dynamically-linked shared object. The "build" script produces this file by copying it from within the target/ directory, which is where the Rust build produces the shared object.

package.json

The npm manifest file, which informs the npm command.

src/

The directory tree containing the Rust source code for the project.

src/lib.rs

The Rust library's main module.

target/

Binary artifacts generated by the Rust build.

Learn More

To learn more about Neon, see the Neon documentation.

To learn more about Rust, see the Rust documentation.

To learn more about Node, see the Node documentation.

Comments
  •  [WIP] small tentative for EMD 1D in torch

    I saw the torch branch for LP stuff. Would you be interested in my implementation for the 1d EMD (and the sliced wasserstein with it)?

    I'm not a huge fan of Pytorch so I can't vouch that what I'm doing here is the best implementation, but it feels to me like it should be fairly ok for batched inputs which is what you want for slice stuff anyway.
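For background, the 1D case with uniform weights and equal sample sizes has a closed form: sort both samples and average the distances between matched order statistics. A minimal plain-Python sketch (the function name is hypothetical):

```python
def emd_1d(x, y, p=1):
    """1D EMD between two equal-size, uniform-weight samples:
    sort both and average |x_(i) - y_(i)|**p over matched order statistics."""
    assert len(x) == len(y)
    xs, ys = sorted(x), sorted(y)
    return sum(abs(u - v) ** p for u, v in zip(xs, ys)) / len(xs)

print(emd_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```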

    opened by AdrienCorenflos 47
  • GPU changes:

    • Replace cudamat by cupy for GPU implementations (cupy is still in active development, while cudamat is not)
    • Use the new DA class instead of the old deprecated one

    TODO for another PR:

    • Performances are still a bit lower than with cudamat (even if better than CPU for large matrices). Some speedups should be possible by tweaking the code
    opened by toto6 38
  • Domain adaptation Classes

    • first proposal of DA class structure
    • BaseEstimator: OTDA wrapper (does not work as a stand-alone but implements the methods common to any OTDA algorithm)
    • SinkhornTransport: implements Sinkhorn algorithm for OTDA
    • try doc strings compliant with numpy requirements
    opened by Slasnista 29
  • Domain adaptation Classes

    We should change the domain adaptation Classes to be more sklearn compliant.

    Main issues:

    • Use CamelCase for classes
    • Use __init__ for setting parameters, instead of fit.

    @agramfort proposed to create new classes with proper names and begin deprecating the old classes.

    I think it is a good move.
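The convention being proposed can be outlined like this (a hypothetical sketch of the shape, not the actual POT class):

```python
class SinkhornTransport:
    """sklearn-style estimator: hyperparameters in __init__, data only in fit."""

    def __init__(self, reg=1.0, max_iter=1000):
        self.reg = reg
        self.max_iter = max_iter

    def fit(self, Xs, Xt):
        # the coupling between source Xs and target Xt would be computed here
        self.Xs_ = Xs
        self.Xt_ = Xt
        return self  # returning self enables chaining, as in sklearn

est = SinkhornTransport(reg=0.1).fit([[0.0]], [[1.0]])
print(est.reg)  # 0.1
```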

    enhancement 
    opened by rflamary 28
  • Not in simplex -- two sets of largely different sizes

    I am trying to calculate the EMD of two sets. When one set has a few hundred entries and the other has only 2, the EMD calculation fails and returns Problem Infeasible.

    Steps to reproduce the behavior: ** SEE BELOW COMMENT FOR FIXED SCRIPT **

    Expected behavior: should return an EMD around 1; instead it says that the sets spherEng1 and pencilEnergy are not in the simplex.
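One thing worth ruling out first (an assumption on my part, not confirmed in this thread): "not in the simplex" errors usually mean the weight vectors do not sum to 1 within the solver's tolerance, which is easy to hit when the two sets have very different sizes. Explicitly renormalizing both marginals is a cheap check:

```python
def normalize(w):
    # rescale weights so they sum to 1.0 (up to float rounding)
    s = float(sum(w))
    return [v / s for v in w]

a = normalize([1.0] * 300)  # a few hundred entries
b = normalize([1.0] * 2)    # only two entries
print(sum(a), sum(b))
```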

    Screenshots: a comparison of the EMDs calculated from the least densely tiled to the most densely tiled case (number of particles = number of segments) against the two-element set.

    Desktop (please complete the following information):

    • OS: [MacOSX]
    • Python version [3.6]
    • POT installed with pip

    import platform; print(platform.platform())
    Darwin-16.7.0-x86_64-i386-64bit
    import sys; print("Python", sys.version)
    ('Python', '2.7.15 |Anaconda, Inc.| (default, Dec 14 2018, 13:10:39) \n[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]')
    import numpy; print("NumPy", numpy.__version__)
    ('NumPy', '1.15.4')
    import scipy; print("SciPy", scipy.__version__)
    ('SciPy', '1.1.0')
    import ot; print("POT", ot.__version__)
    ('POT', '0.5.1')

    opened by caricesarotti 25
  • [MRG] Sliced wasserstein

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [X] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and context / Related issue

    Implement SWD: https://github.com/PythonOT/POT/issues/202

    How has this been tested (if it applies)

    Added specific tests (positive definiteness + matching the EMD in the 1D case)
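For intuition, the sliced Wasserstein distance averages 1D Wasserstein distances over random projection directions, and each 1D distance reduces to sorting. A minimal plain-Python sketch for equal-size 2D point sets (all names are hypothetical, not this PR's API):

```python
import math
import random

def sliced_wasserstein(X, Y, n_proj=50, seed=0):
    """Average the 1D Wasserstein-1 distance of X and Y projected
    onto random directions of the unit circle (equal-size point sets)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_proj):
        t = rng.uniform(0.0, 2.0 * math.pi)
        ux, uy = math.cos(t), math.sin(t)
        # project, then the 1D distance is a sort + elementwise average
        px = sorted(x * ux + y * uy for x, y in X)
        py = sorted(x * ux + y * uy for x, y in Y)
        total += sum(abs(a - b) for a, b in zip(px, py)) / len(px)
    return total / n_proj

X = [(0.0, 0.0), (1.0, 1.0)]
Y = [(2.0, 0.0), (3.0, 1.0)]
print(sliced_wasserstein(X, X))  # 0.0
```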

    Checklist

    • [X] The documentation is up-to-date with the changes I made.
    • [X] I have read the CONTRIBUTING document.
    • [x] All tests passed, and additional code has been covered with new tests.

    Not sure why yet but the stuff doesn't build.

    I'm publishing this as a draft as I have some other changes in my branch that are pending for another merge (cf this: https://github.com/PythonOT/POT/issues/200)

    opened by AdrienCorenflos 19
  • [MRG] Improved docs and changed scipy version

    I changed the scipy version requirements since version scipy 1.2.1 made my POT crash (cannot remember on which call, sorry, it happened while building the docs) and the issue was fixed when upgrading to scipy 1.3

    Apart from that, the main goal of this PR is to homogenize a bit the presentation in the docs.

    opened by rtavenar 15
  • fail when using "pip install POT"

    When I run "pip install POT", it fails: the build depends on Cython, but the package does not declare that dependency to pip.

    I solved the problem by installing Cython first. However, if both Cython and POT are listed in requirements.txt, the installation still fails.

    Could anyone solve that?

    documentation 
    opened by Adoni 15
  • [MRG] Add Unbalanced KL Wasserstein distance + barycenter

    new unbalanced module for UOT with KL relaxation with the funcs:

    • sinkhorn_unbalanced: generalized Sinkhorn to compute W

    • barycenter_unbalanced: unbalanced Wasserstein barycenter

    • Tests of convergence for both algorithms

    • Examples plot_UOT_1D and plot_UOT_barycenter_1D with unbalanced gaussian distributions.
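For context, the KL-relaxed generalized Sinkhorn differs from balanced Sinkhorn only in the scaling updates, which are raised to the power reg_m / (reg_m + reg). A rough numpy sketch of the iteration (an illustration, not POT's actual implementation):

```python
import numpy as np

def sinkhorn_unbalanced_sketch(a, b, M, reg, reg_m, n_iter=1000):
    # Gibbs kernel of the cost matrix
    K = np.exp(-M / reg)
    # the KL marginal relaxation only changes the exponent of the scalings
    fi = reg_m / (reg_m + reg)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]

a = b = np.array([0.5, 0.5])
M = np.array([[0.0, 1.0], [1.0, 0.0]])
G = sinkhorn_unbalanced_sketch(a, b, M, reg=0.1, reg_m=100.0)
print(G.sum(axis=1))  # close to a when reg_m is large
```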

    new feature 
    opened by hichamjanati 14
  • Pytest with 89% coverage

    • Add numerous tests for existing functions and classes.
    • Correct failing build due to the Python 3/2 map function difference.

    Will merge soon since POT currently does not build.

    opened by rflamary 11
  • nan values and division by zero warnings on stochastic solvers

    Description

    The following output showed up when playing with the code in the docs of the stochastic sub-module. I think the output of the following code snippet is clear enough to show where the problem is, but I didn't want to open a PR directly since I am really new to these topics.

    To Reproduce

    Below is the same code sample from the docs, except that n_source here is significantly higher.

    import cupy as cp
    import numpy as np
    import ot
    n_source = 70000
    n_target = 100
    reg = 1
    numItermax = 100000
    lr = 0.1
    batch_size = 3
    log = True
    
    a = ot.utils.unif(n_source)
    b = ot.utils.unif(n_target)
    
    rng = np.random.RandomState(0)
    X_source = rng.randn(n_source, 2)
    Y_target = rng.randn(n_target, 2)
    M = ot.dist(X_source, Y_target)
    
    method = "ASGD"
    asgd_pi, log_asgd = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method, numItermax, log=log)
    print(log_asgd['alpha'], log_asgd['beta'])
    print(asgd_pi)
    
    /home/selman/anaconda3/envs/sd/lib/python3.9/site-packages/ot/stochastic.py:85: RuntimeWarning: overflow encountered in exp
      exp_beta = np.exp(-r / reg) * b
    /home/selman/anaconda3/envs/sd/lib/python3.9/site-packages/ot/stochastic.py:86: RuntimeWarning: invalid value encountered in true_divide
      khi = exp_beta / (np.sum(exp_beta))
    [nan nan nan ... nan nan nan] [nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
     nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
     nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
     nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
     nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
     nan nan nan nan nan nan nan nan nan nan]
    [[nan nan nan ... nan nan nan]
     [nan nan nan ... nan nan nan]
     [nan nan nan ... nan nan nan]
     ...
     [nan nan nan ... nan nan nan]
     [nan nan nan ... nan nan nan]
     [nan nan nan ... nan nan nan]]
    

    Additional Context

    Right now I use the stochastic.py file separately in my own project because of this problem. I added a small value to the divisions and it seems to work fine, but I am not sure whether it is an appropriate approach. For example:

    khi = exp_beta / (np.sum(exp_beta) + 1e-8)
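A more robust alternative to adding a small constant (my assumption, not a maintainer's fix) is the standard log-sum-exp shift, which avoids the overflow in the exponential altogether. The idea in plain Python:

```python
import math

def stable_weights(scores):
    """Softmax-style normalization; subtracting the max keeps exp() in range."""
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    total = sum(e)
    return [x / total for x in e]

# math.exp(1002) would overflow a float64, but the shifted version is fine:
w = stable_weights([1000.0, 1001.0, 1002.0])
print(sum(w))  # 1.0 up to rounding
```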
    

    Environment:

    Linux-5.10.42-1-MANJARO-x86_64-with-glibc2.33
    Python 3.9.5 (default, Jun  4 2021, 12:28:51) 
    [GCC 7.5.0]
    NumPy 1.20.2
    SciPy 1.6.2
    POT 0.7.0
    
    bug help wanted 
    opened by syelman 3
  • dim>1 for ot.bregman.barycenter

    🚀 Feature

    Extend ot.bregman.barycenter to higher dimensions (>1D data)

    Motivation

    Computing barycenters for color images requires at least 3 dimensions (rgb or lab).

    Pitch

    Recent papers have suggested this for data augmentation

    enhancement 
    opened by gabrieldernbach 3
  • [WIP] OpenMP support

    Added: OpenMP support. Restored: Epsilon and Debug mode. Replaced: parmap; multiprocessing is now replaced by multithreading.

    Types of changes

    • [x] Docs change / refactoring / dependency upgrade
    • [x] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [x] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and context / Related issue

    How has this been tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document.
    • [ ] All tests passed, and additional code has been covered with new tests.
    opened by kguerda-idris 2
  • [WIP] Sliced Wasserstein distances : backend versions

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and context / Related issue

    Implementation of Sliced Optimal Transport distances with the new backend

    How has this been tested (if it applies)

    Compared against scipy's wasserstein_distance and against expected results from closed-form solutions

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document.
    • [x] All tests passed, and additional code has been covered with new tests.
    opened by ncourty 2
  • Sinkhorn Divergence code does not match referenced paper

    Describe the bug

    I'm opening this as a bug, but the code actually does work. The issue is the definition used for the Sinkhorn divergence: the code does not match the formula in the referenced paper ("Learning Generative Models with Sinkhorn Divergences", 2017, https://arxiv.org/pdf/1706.00292.pdf).

    Code sample and Expected behavior

    Here is a piece of the code from empirical_sinkhorn_divergence: sinkhorn_div = sinkhorn_loss_ab - 0.5 * (sinkhorn_loss_a + sinkhorn_loss_b).

    To match the sinkhorn divergence formula from the paper, the code should probably be: sinkhorn_div = 2* sinkhorn_loss_ab - (sinkhorn_loss_a + sinkhorn_loss_b).

    This is a minor issue, but perhaps the documentation should address this difference.

    Another issue is that the sinkhorn_loss returns

    W = \min_\gamma \langle \gamma, M \rangle_F + \mathrm{reg} \cdot \Omega(\gamma)
    

    While in the paper, the sinkhorn cost is only

    W = \langle \gamma^*, M \rangle_F,
    

    where \gamma^* is the optimal plan for the regularized problem. In other words, the regularization term is only used to find the optimal plan and is then discarded.
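In symbols, the two conventions discussed above differ only by a factor of 2:

```latex
% convention used by empirical_sinkhorn_divergence in the code:
S_{\mathrm{code}}(a, b) = W(a, b) - \tfrac{1}{2}\bigl( W(a, a) + W(b, b) \bigr)

% convention in the referenced paper:
S_{\mathrm{paper}}(a, b) = 2\, W(a, b) - W(a, a) - W(b, b) = 2\, S_{\mathrm{code}}(a, b)
```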

    documentation help wanted 
    opened by davibarreira 3
  • [WIP] Using OpenMP for exact solver EMD

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and context / Related issue

    Using OpenMP to parallelize the exact solver EMD.

    How has this been tested (if it applies)

    Tested on Python/3.7+, gcc 6.5+

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document.
    • [ ] All tests passed, and additional code has been covered with new tests.
    opened by ncassereau-idris 0
  • Interpolated/partial transform?

    🚀 Feature

    transform (and perhaps inverse_transform) should allow for interpolated (partial) transformation, given a parameter lambda (default=1).

    Motivation

    Interpolation allows one to seamlessly "morph" between two distributions.

    Pitch

    This is useful for all kinds of image processing tasks, where one does not want to fully transform a distribution, but gradually or partially transform the distribution.

    For example, consider this blog post where your toolkit is used to transform the color map from a day image onto a night image. Interpolated transport would allow this transformation to happen gradually and generate a video.

    Alternatives

    Kludging the matrix and then running transform. Any guidance on how to do this would be greatly appreciated.

    Additional context

    Roma et al. (2020) describe this process for audio morphing: "Displacement interpolation is then accomplished by sliding through the non-zero entries of the transport matrix: given an interpolation parameter λ, each pair of masses in the matrix are interpolated to (1 − λ)xi + λyi and added to the output spectrum."
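The quoted procedure is easy to sketch: given the non-zero entries of a transport plan as (weight, x, y) triples, slide each matched pair of masses to (1 - λ)x + λy. A plain-Python illustration (data and names are hypothetical):

```python
def interpolate_plan(plan, lam):
    """plan: non-zero transport entries as (weight, x, y) with 1D positions.
    Returns the displacement-interpolated point masses at parameter lam."""
    return [(w, (1.0 - lam) * x + lam * y) for (w, x, y) in plan]

# two half-unit masses moving 0 -> 10 and 5 -> 7
plan = [(0.5, 0.0, 10.0), (0.5, 5.0, 7.0)]
print(interpolate_plan(plan, 0.5))  # [(0.5, 5.0), (0.5, 6.0)]
```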

    Attached is an image from their work.

    enhancement 
    opened by turian 0
  • Torch component structure

    Hi,

    I wanted to open a discussion about designing the structure that the torch component for POT should embrace. I am not sure that having the complete distinction numpy/torch is necessarily the best one as it implies a subcoverage of sorts.

    My personal preference would probably go towards an architecture which would be method oriented and not computational backend oriented, that is something along the lines of

    ├── ot
    │   ├── bregman   # may also be subdivided into balanced, unbalanced, stabilized to cut down file length
    │   │   ├── numpy
    │   │   └── torch
    └── ...

    The tests being organised similarly.

    In this kind of architecture the emphasis would be put on the methods more than the implementation. This could also be all located into a _src folder, with a top level numpy, torch import for ease of access for the end user.

    What do you think?

    opened by AdrienCorenflos 0
  • Numerical issue of ot.emd with large entries

    Describe the bug

    It seems ot.emd fails to return an optimal plan (up to numerical precision) if there are large entries in the cost matrix (even if the optimal weight to put on these entries is 0).

    To Reproduce

    import numpy as np
    import ot
    
    M = np.array(
        [
            [2.50275352e02, 3.74653218e02, 2.41352736e03, 1.00000000e32, 1.51751540e-03],
            [2.13082030e02, 3.28812836e02, 2.29487946e03, 1.00000000e32, 1.37109800e-01],
            [1.97333083e02, 3.09175848e02, 2.24250550e03, 1.00000000e32, 2.46506283e00],
            [1.00000000e32, 1.00000000e32, 1.00000000e32, 5.26223432e00, 2.50000000e31],
            [3.84690152e01, 8.09465684e01, 3.33064175e02, 2.50000000e31, 0.00000000e00],
        ]
    )
    
    a = np.array([0.125, 0.125, 0.125, 0.125, 0.5])
    b = np.array([0.125, 0.125, 0.125, 0.125, 0.5])
    P = ot.emd(a=a, b=b, M=M, numItermax=2000000)
    Q = np.array(
        [
            [0, 0, 0, 0, 0.125],
            [0, 0, 0, 0, 0.125],
            [0, 0, 0, 0, 0.125],
            [0, 0, 0, 0.125, 0],
            [0.125, 0.125, 0.125, 0, 0.125],
        ]
    )
    assert (P.sum(axis=0) == b).all()
    assert (P.sum(axis=1) == a).all()
    assert (Q.sum(axis=0) == b).all()
    assert (Q.sum(axis=1) == a).all()
    print("my cost matrix:\n", Q)
    print("POT matrix:\n", P)
    print("POT cost:", np.sum(np.multiply(P, M)))
    print("my cost:", np.sum(np.multiply(Q, M)))
    

    returns:

    my cost matrix:
     [[0.    0.    0.    0.    0.125]
     [0.    0.    0.    0.    0.125]
     [0.    0.    0.    0.    0.125]
     [0.    0.    0.    0.125 0.   ]
     [0.125 0.125 0.125 0.    0.125]]
    POT matrix:
     [[0.    0.125 0.    0.    0.   ]
     [0.125 0.    0.    0.    0.   ]
     [0.    0.    0.125 0.    0.   ]
     [0.    0.    0.    0.125 0.   ]
     [0.    0.    0.    0.    0.5  ]]
    POT cost: 354.43787279000003
    my cost: 57.54321038317501
    

    Expected behavior

    ot.emd should return (up to numerical precision) a transport plan (at least) as good as the Q I manually propose.

    Environment (please complete the following information):

    • OS (e.g. MacOS, Windows, Linux): Ubuntu 20.04
    • Python version: 3.7
    • How was POT installed (source, pip, conda): conda

    Output of the following code snippet:

    >>> import platform; print(platform.platform())
    Linux-5.4.0-70-generic-x86_64-with-debian-bullseye-sid
    >>> import sys; print("Python", sys.version)
    Python 3.7.4 (default, Aug 13 2019, 20:35:49) 
    [GCC 7.3.0]
    >>> import numpy; print("NumPy", numpy.__version__)
    NumPy 1.16.4
    >>> import scipy; print("SciPy", scipy.__version__)
    SciPy 1.3.1
    >>> import ot; print("POT", ot.__version__)
    POT 0.7.0
    

    Additional context

    As shown, I set numItermax to 2000000 and didn't get any warning (and the code ran fast), so the algorithm does converge.
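One plausible contributor (my assumption; I have not checked the solver internals): float64 carries only about 15-16 significant digits, so the 1e32 sentinel entries completely swallow the moderate costs in any arithmetic that mixes them:

```python
# float64 has ~15-16 significant digits, so adding a "normal" cost
# to a 1e32 sentinel changes nothing at all:
print(1e32 + 354.4 == 1e32)   # True: the small cost is lost
print(1e32 - 1e32 + 354.4)    # 354.4: order of operations decides everything
# a sentinel within the precision budget of the other entries behaves:
big = 1e8
print(big + 354.4 == big)     # False
```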

    bug help wanted 
    opened by tlacombe 11
  • [WIP] torch implementation of the Sliced Wasserstein Distance

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Motivation and context / Related issue

    Torch implementation of the SWD (or sliced OT loss? how do you want to call it?)

    https://github.com/PythonOT/POT/issues/225

    How has this been tested (if it applies)

    Added a few unittests, needs to be tested further (WIP)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document.
    • [ ] All tests passed, and additional code has been covered with new tests.
    opened by AdrienCorenflos 15
Owner
Md. Al-Amin
Open Source Contributor, Competitive Programmer, Security Researcher, Traveller