Encode and decode dynamically constructed values of arbitrary shapes to/from SCALE bytes

Overview

scale-value

This crate provides a Value type, which is a runtime representation that is compatible with scale_info::TypeDef. It is somewhat analogous to serde_json::Value, a runtime representation of JSON values, but with a focus on SCALE encoded values rather than JSON encoded values. Unlike JSON, however, SCALE encoding is not self-describing, and so we need additional type information to tell us how to encode and decode values.

It is expected that this crate will commonly be used in conjunction with the scale-info and frame-metadata crates.

The scale-info crate allows us to define types and add them to a type registry, which in turn is used to tell us how to SCALE encode and decode Values.

The frame-metadata crate contains all of the type information we need in order to be able to SCALE encode and decode Values into the various parameters needed in extrinsics and such.

Crate features (enabled by default):

  • serde: Allow Values to be converted from and to static Rust types (where possible), or serialized and deserialized to other formats like JSON, via serde.
  • from_string: Allow strings to be parsed into Values using the same format that values are converted to strings with via .to_string(). Examples:
    • Boolean types parse from true and false.
    • Strings and chars are supported with "Hello\n there" and 'a'.
    • Numbers like 1_234_567 and -123 are supported.
    • Composite types (structs/tuples) look like { hello: 123, "there": true } and ('a', 'b', true).
    • Finally, enum variants look like Hello { foo: 1, bar: 2 } and Foo(1,2,3).

Examples

Manually creating a type registry, and then using it to SCALE encode and decode a runtime-constructed Value to/from SCALE bytes.

// Turn a type into an ID and type registry using `scale-info`:
fn make_type<T: scale_info::TypeInfo + 'static>() -> (u32, scale_info::PortableRegistry) {
    let m = scale_info::MetaType::new::<T>();
    let mut types = scale_info::Registry::new();
    let id = types.register_type(&m);
    let portable_registry: scale_info::PortableRegistry = types.into();
    (id.id(), portable_registry)
}

// Some type which we have derived SCALE type information about:
#[derive(scale_info::TypeInfo)]
enum Foo {
    A { is_valid: bool, name: String }
}

// We can build a type registry containing just this type:
let (type_id, registry) = make_type::<Foo>();
use scale_value::Value;

// Next, we can construct a runtime value of a similar shape:
let value = Value::named_variant("A", vec![
    ("is_valid".into(), Value::bool(true)),
    ("name".into(), Value::string("James")),
]);

// Given the type registry and ID, we can try to convert our Value into SCALE bytes:
let mut bytes = Vec::new();
scale_value::scale::encode_as_type(value.clone(), type_id, &registry, &mut bytes).unwrap();

// We can also go the other way, and decode our bytes back into the same Value:
let new_value = scale_value::scale::decode_as_type(&mut &*bytes, type_id, &registry).unwrap();

// The two values should equal each other (`.remove_context()` just removes the additional
// type information handed back when the value is decoded):
assert_eq!(value, new_value.remove_context());

Using the serde feature to convert a Value to/from some rust type via serde:

use scale_value::Value;
use serde::{ Serialize, Deserialize };

// Some type we want to be able to serialize/deserialize:
#[derive(Serialize, Deserialize, PartialEq, Debug)]
enum Foo {
    A { is_valid: bool, name: String },
    B(u8, bool)
}

// First, deserialize a Value into the rust type:
let value = Value::named_variant("A", vec![
    ("name".into(), Value::string("James")),
    ("is_valid".into(), Value::bool(true)),
]);
let foo1: Foo = scale_value::serde::from_value(value).unwrap();
assert_eq!(foo1, Foo::A { is_valid: true, name: "James".into() });

// Next, serialize the rust type back into a Value:
let new_value = scale_value::serde::to_value(foo1).unwrap();
assert_eq!(value, new_value);

Check out the documentation for a full API reference and more examples.

Comments
  • value! macro for creating Value structs


    Closes #21

    Supports unnamed and named composites and variants:

    use scale_value::value;
    
    let val = value!({
        name: "localhost",
        address: V4(127, 0, 0, 1),
        payload: {
            bytes:  (255, 3, 4, 9),
            method: ("Post", 3000),
        },
    });
    
    

    Values can be nested in each other:

    use scale_value::value;
    
    let data_value = value!((1, v1(1, 2), 3));
    let val = value!(POST { data: data_value });
    

    Trailing commas are optional

In a lot of places I replaced code in tests with the macro, which also gives us a good check that everything is working fine. There is only one thing that is not working yet, but it should be rare anyway: keys in named composites that have spaces in them.

    opened by tadeohepperle 4
  • ValueDefinition support Debug


    It would be good to be able to display the structure of the type if debug formatted so that people can see how they might match against the structure.

    opened by gilescope 4
  • encode: Encode nested tuples properly


    This PR ensures that nested tuples inside sequences can be encoded properly.

    Prior to this, encoding into a Vec<(u8, u16)> would not work properly with dynamic values of:

    Value::unnamed_composite(vec![
        Value::u128(3u8.into()),
        Value::u128(1u16.into()),
    ]);
    

    This is because the find_sequence_candidate function assumed that the last type of the value is the "end-type" of the sequence.

    Instead, if we encounter a tuple/composite inside a sequence (or array), we now recurse back to encode_composite with the proper (unflattened) parameters.

    Detected by: https://github.com/paritytech/subxt/issues/982.

    // @paritytech/subxt-team

    opened by lexnv 2
  • Remove/alter U256/I256 types


    These types currently exist because they exist in scale-info, but I expect they aren't super commonly used (?), and I think it would be more ergonomic, and simplify the code a bit, to deserialize anything matching those types into a Composite::Unnamed (ie a Vec<Value<T>> where each value is a Primitive::U128 unsigned number).

    This type is obviously much larger and no longer allocation free, but comes with a couple of advantages:

    • we can remove some of our special handling that has to look at composites or U256/I256 values when encoding/decoding from sequences.
    • currently, if you serde::serialize a U256/I256 it'll serialize into a sequence of bytes, but then if you try deserializing that sequence of bytes again you get a Composite::Unnamed back (it's the catch-all sequence deserialize target). Removing U256/I256 leaves us with exactly one sequence target and so no ambiguity, so things can go back and forth without any issue.
    • U256/I256 types can't be nicely worked with in rust anyway, so having a special case for them feels slightly odd. If they are super common then having a special case makes sense, but otherwise I don't think it does.

    (side note: bit sequences are just vectors of bools; does it make sense to also "de-optimise" those and remove the special BitSequence type?)

    opened by jsdw 1
  • Support parsing strings to Values and converting Values to strings


    cargo-contract has a format called SCON (https://github.com/paritytech/cargo-contract/blob/master/src/cmd/extrinsics/transcode/scon/parse.rs) that runtime values can be parsed from and to. We could add something like this here behind a feature flag.

    Take inspiration from the above, but offhand I'm thinking something like:

    • Numbers like 1234 or 1_234_567 or -1234 or -12_345 parse into our U128/I128 primitives.
    • Strings in double quotes with \" to escape any inner double quotes and \\ to escape the backslashes? may want to support \n, \t and such too?
    • chars in single quotes eg 'a', 'b'.
    • true and false for bools.
    • named composites can encode to something like { name: 1_234, other: false }. Could accept double quotes around field names to accept arbitrary strings eg { "a weird name": 1_234, other: false } (though I wouldn't expect names that weren't compatible with rust struct field names).
    • unnamed composites can be tupleish like (1_234, false). Could accept square brackets, but I like how this relates/works nicely when used as variants too (see next point), so would be inclined not to.
    • named and unnamed variants could be as above but preceded with the variant name eg MyVariant { name: 1_234, other: false } or MyVariant(1_234, false).
    • bit sequences could be something like <110101011001>? These will be uncommon I'd guess, and bool composites will encode to types expecting bit sequences anyway, so <001> would SCALE encode (but not serialize) the same as (false, false, true).
    opened by jsdw 1
  • Tweak composite encoding to guarantee success if initial types match


    After reflecting on #31, I thought it might be better to try and simplify the method that we use to encode Values.

    Previously, we'd immediately try to unwrap things that looked like newtype wrappers from the TypeInfo and, in most cases, from the Value. Things like sequences cause us issues here; a sequence of 1 value looks just like a newtype wrapper. Another issue is that we'd changed things before we ever tried to encode, and so some Values which already aligned perfectly with the intended TypeInfo were rejected.

    Now, when we encounter a composite type we will:

    1. Try to encode everything as-is without changing anything. This ensures that if the user provides a Value that lines up with the expected target type, it should always encode successfully.
    2. If that fails, remove any "newtype"-like wrappings from the TypeInfo we're targeting and try to encode our Value to that. If the Value has already had all newtype wrappings removed, this should now work.
    3. If that fails, unwrap one layer of the Value at a time (as long as we have layers that look like possible newtype wrappings) and try to encode again until we have nothing left to remove. This is the slowest path, since we may try encoding more than once at different depths, but it is our best effort to find some version of the provided Value that lines up with the encode target.

    If nothing is successful, we'll always return the error from step 1, so that it makes sense given what the user provided.

    opened by jsdw 0
  • Add hex and ss58 custom parsers


    These parsers are available via scale_value::stringify::custom_parsers and allow hex and ss58 addresses to be correctly parsed as part of a string Value.

    opened by jsdw 0
  • Improve stringifying and add support for custom parsers


    We add an "escape hatch" to allow users to parse custom strings into Value's in addition to the usual parsing logic. This allows custom logic to be inserted to cover specific cases, like parsing from hex strings or ss58 addresses.

    opened by jsdw 0
  • Use scale-encode and scale-decode, and enable zero-copy decoding via Visitor trait


    Part of a series:

    • scale-encode: new crate to handle encoding using type info.
    • scale-decode: handle decoding using type info.
    • scale-value: update the Value type to use scale-encode/scale-decode.
    • subxt: Start taking advantage of these in Subxt.

    This PR:

    • Leans on scale-encode to encode Values.
    • Updates the Visitor impl to the new scale-decode and ensures Value also impls DecodeAsType and DecodeAsFields/EncodeAsFields where appropriate.

    The net effect of this is that Values share much of their encode and decode logic with every other static type now.

    opened by jsdw 0
  • use scale-bits for BitSequence decoding etc and enable WASM test


    We've consolidated bit sequence handling into the scale-bits crate, so lean on that and remove encode/decode logic from here.

    Also add a CI check that we can compile the crate to WASM.

    And for the sake of expediency I've also prepped for the next release.

    Closes #2

    opened by jsdw 0
  • Use scale-decode for Value decoding.


    This moves the decoding logic out into a separate crate, scale-decode.

    (As a side effect I slightly simplified how compact values are decoded to remove some nesting, since it felt unnecessary.)

    opened by jsdw 0
  • Should we support decoding into Values from pre-V14 metadata?


    Currently we can decode things using V14 metadata, since V14 metadata contains the necessary type information.

    Pre-V14, I think we need to support type-mapping information (which maps from the type name as seen in the metadata to something we can understand). Each pre-V14 metadata differs a bit, and so we may end up needing some wrapper TypeRegistry<'a> sort of type that a &PortableRegistry can be converted into easily, and that pre-V14 metadatas + type mappings could convert into also.

    Desub has some logic for this already to decode blocks, so hopefully we could borrow or draw inspiration from how that works.

    I suspect that adding this support could be rather complex and time consuming, so we need to weigh up whether it's worthwhile, and if so, do a preliminary investigation into how complex/time-consuming it would be before embarking on any actual work.

    opened by jsdw 2
  • Either support parsing to/from U256/I256 or remove the variants


    Currently we support decoding and encoding into U256/I256 variants, but both are just represented as arrays of bytes.

    Serializing values containing U256/I256s via serde isn't great; they serialize into byte sequences. Deserializing those leaves you with composites; no deserialize input can produce a U256/I256, so unlike other types, they don't convert back and forth nicely.

    With #7 we cannot yet stringify U256/I256, and we cannot parse strings into them. Parsing isn't as important (we get back a nice error if a numeric string is too big to fit U128/I128), but calling .to_string() on a value containing a U256/I256 will panic. Whatever we stringify should be parsable back again, so we need to add both or neither.

    So, do we:

    1. figure out how to parse and stringify U256/I256 (my vote; accept numeric strings and parse them into the smallest of U256/I256 or U128/I128 that they will fit into), or
    2. remove support for U256/I256 entirely, since they are, from what I gather, unused on the whole (see https://github.com/paritytech/scale-info/pull/25 for history). Ethereum needs them, but parity-scale-codec doesn't support encoding/decoding into them, and without manual TypeInfo impls they will never be seen in metadata.
    opened by jsdw 4
Releases(v0.10.0)
  • v0.10.0(May 31, 2023)

  • v0.9.0(May 30, 2023)

  • v0.8.1(May 30, 2023)

  • v0.8.0(May 11, 2023)


    This release:

    • Bumps to using scale-info 2.5.0 and uses field rather than method accessors as introduced by that change.
    • Introduces scale_value::stringify::from_str_custom(), which allows you to construct a Value parser that can inject custom parsing logic while parsing strings into Values.
    • Adds two new custom parsers in a new stringify::custom_parsers module for parsing hex values and ss58 addresses. These can be used in conjunction with the above.
    • Fixes a small bug in stringifying so that field and enum idents are no longer quoted unless necessary; this will make the output prettier.

    There should be no breaking API changes.

    Added

    • Add hex and ss58 custom parsers. (#29)
    • Improve stringifying and add support for custom parsers. (#26)
  • v0.7.0(Mar 13, 2023)


    The main change in this release is that it makes use of a new scale-encode crate and updated scale-decode crate for the SCALE encoding and decoding of Values.

    • Values now implement DecodeAsType and EncodeAsType.
    • Composites now implement DecodeAsFields.
    • As a small breaking API change, the TypeId passed to the encode and decode methods is now a plain u32 for simplicity, rather than a newtype struct.

    It should be very straightforward to update to this release as the changes are mainly additive in nature.

    Changed

    • Use latest scale-decode and new scale-encode crate for SCALE encoding and decoding Values. (#25)
  • v0.6.0(Sep 21, 2022)

    Here we move from bitvec to scale-bits to handle our encode/decode logic and provide a simple type to decode bits into. We also add a WASM CI test, and will expect this crate (potentially via features in the future) to be WASM compatible.

    Changed

    • Use scale-bits for BitSequence decoding etc and enable WASM test. (#24)
  • v0.2.1(Jun 27, 2022)


    Fixed

    • Fix compile error on 32-bit architectures owing to BitVec not supporting a store type of u64 on them. Also fix an internal naming mixup w.r.t. bitvec types. (#12)
  • v0.2.0(Jun 14, 2022)


    Added

    • Added a string syntax for values, and the ability to parse Values from strings or encode them into strings (see the new stringify module exposed at the crate root). Parsing from strings requires the from_string feature to be enabled. (#7)
  • v0.1.0(May 23, 2022)


    The initial release.

    Added

    • Added a Value type that can be SCALE encoded and decoded using a scale_info::PortableRegistry, as well as serialized and deserialized to things via serde. (#1)
Owner
Parity Technologies
Solutions for a trust-free world