Compiler infrastructure and toolchain library for WebAssembly




Binaryen is a compiler and toolchain infrastructure library for WebAssembly, written in C++. It aims to make compiling to WebAssembly easy, fast, and effective:

  • Easy: Binaryen has a simple C API in a single header, and can also be used from JavaScript. It accepts input in WebAssembly-like form but also accepts a general control flow graph for compilers that prefer that.

  • Fast: Binaryen's internal IR uses compact data structures and is designed for completely parallel codegen and optimization, using all available CPU cores. Binaryen's IR also compiles down to WebAssembly extremely easily and quickly because it is essentially a subset of WebAssembly.

  • Effective: Binaryen's optimizer has many passes (see an overview below) that can improve code size and speed. These optimizations aim to make Binaryen powerful enough to be used as a compiler backend by itself. One specific area of focus is on WebAssembly-specific optimizations (that general-purpose compilers might not do), which you can think of as wasm minification, similar to minification for JavaScript, CSS, etc., all of which are language-specific.

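As a quick sketch of the C API point above, here is roughly how a module with a single add function can be built (the functions here mirror those declared in binaryen-c.h; compiling and linking against a built Binaryen is assumed):

```cpp
#include <binaryen-c.h>

// Build a module equivalent to:
//   (func $adder (param i32 i32) (result i32)
//     (i32.add (local.get 0) (local.get 1)))
int main() {
  BinaryenModuleRef module = BinaryenModuleCreate();

  // The two i32 parameters form a single parameter type.
  BinaryenType params[2] = {BinaryenTypeInt32(), BinaryenTypeInt32()};
  BinaryenType ii = BinaryenTypeCreate(params, 2);

  BinaryenExpressionRef x = BinaryenLocalGet(module, 0, BinaryenTypeInt32());
  BinaryenExpressionRef y = BinaryenLocalGet(module, 1, BinaryenTypeInt32());
  BinaryenExpressionRef add = BinaryenBinary(module, BinaryenAddInt32(), x, y);

  BinaryenAddFunction(module, "adder", ii, BinaryenTypeInt32(), NULL, 0, add);

  BinaryenModulePrint(module);  // prints the module as s-expression text
  BinaryenModuleDispose(module);
  return 0;
}
```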
Compilers using Binaryen include:

  • AssemblyScript which compiles a variant of TypeScript to WebAssembly
  • wasm2js which compiles WebAssembly to JS
  • Asterius which compiles Haskell to WebAssembly
  • Grain which compiles Grain to WebAssembly

Binaryen also provides a set of toolchain utilities that can

  • Parse and emit WebAssembly. In particular this lets you load WebAssembly, optimize it using Binaryen, and re-emit it, thus implementing a wasm-to-wasm optimizer in a single command.
  • Interpret WebAssembly as well as run the WebAssembly spec tests.
  • Integrate with Emscripten in order to provide a complete compiler toolchain from C and C++ to WebAssembly.
  • Polyfill WebAssembly by running it in the interpreter compiled to JavaScript, if the browser does not yet have native support (useful for testing).

Consult the contributing instructions if you're interested in participating.

Binaryen IR

Binaryen's internal IR is designed to be

  • Flexible and fast for optimization.
  • As close as possible to WebAssembly so it is simple and fast to convert it to and from WebAssembly.

There are a few differences between Binaryen IR and the WebAssembly language:

  • Tree structure
    • Binaryen IR is a tree, i.e., it has hierarchical structure, for convenience of optimization. This differs from the WebAssembly binary format which is a stack machine.
    • Consequently Binaryen's text format allows only s-expressions. WebAssembly's official text format is primarily a linear instruction list (with s-expression extensions). Binaryen can't read the linear style, but it can read a wasm text file if it contains only s-expressions.
    • Binaryen uses Stack IR to optimize "stacky" code (that can't be represented in structured form).
    • When stacky code must be represented in Binaryen IR, such as with multivalue instructions and blocks, it is represented with tuple types that do not exist in the WebAssembly language. In addition to multivalue instructions, locals and globals can also have tuple types in Binaryen IR but not in WebAssembly. Experiments show that better support for multivalue would enable only small code size savings of 1-3%, so it has not been worth changing the core IR structure to support it better.
    • Block input values (currently only supported in catch blocks in the exception handling feature) are represented as pop subexpressions.
  • Types and unreachable code
    • WebAssembly limits block/if/loop types to none and the concrete value types (i32, i64, f32, f64). Binaryen IR has an unreachable type, and it allows block/if/loop to take it, allowing local transforms that don't need to know the global context. As a result, Binaryen's default text output is not necessarily valid wasm text. (To get valid wasm text, you can use --generate-stack-ir --print-stack-ir, which prints Stack IR; this is guaranteed to be valid for wasm parsers.)
    • Binaryen ignores unreachable code when reading WebAssembly binaries. That means that if you read a wasm file with unreachable code, that code will be discarded as if it were optimized out (often this is what you want anyhow, and optimized programs have no unreachable code anyway, but if you write an unoptimized file and then read it, it may look different). The reason for this behavior is that unreachable code in WebAssembly has corner cases that are tricky to handle in Binaryen IR (it can be very unstructured, and Binaryen IR is more structured than WebAssembly as noted earlier). Note that Binaryen does support unreachable code in .wat text files, since as we saw Binaryen only supports s-expressions there, which are structured.
  • Blocks
    • Binaryen IR has only one node that contains a variable-length list of operands: the block. WebAssembly on the other hand allows lists in loops, if arms, and the top level of a function. Binaryen's IR has a single operand for all non-block nodes; this operand may of course be a block. The motivation for this property is that many passes need special code for iterating on lists, so having a single IR node with a list simplifies them.
    • As in wasm, blocks and loops may have names. Branch targets in the IR are resolved by name (as opposed to nesting depth). This has two consequences:
      • Blocks without names may not be branch targets.
      • Names are required to be unique. (Reading .wat files with duplicate names is supported; the names are modified when the IR is constructed).
    • As an optimization, a block that is the child of a loop (or if arm, or function toplevel) and which has no branches targeting it will not be emitted when generating wasm. Instead its list of operands will be directly used in the containing node. Such a block is sometimes called an "implicit block".
  • Reference Types
    • The wasm text and binary formats require that a function whose address is taken by ref.func must be either in the table, or declared via an (elem declare func $..). Binaryen will emit that data when necessary, but it does not represent it in IR. That is, IR can be worked on without needing to think about declaring function references.
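The "implicit block" point above can be sketched in wasm text (hand-written for illustration, not actual Binaryen output):

```wat
;; In Binaryen IR a loop has a single child, so a list of operands
;; needs a wrapping block:
(loop $l
  (block
    (call $step)
    (br_if $l (call $more))))

;; When emitting wasm, the unnamed block has no branches targeting it,
;; so only its contents are written inside the loop.
```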

As a result, you might notice that round-trip conversions (wasm => Binaryen IR => wasm) change code a little in some corner cases.
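The text-format difference described above can be seen on a small function. Both snippets below denote the same code (hand-written for illustration):

```wat
;; Official linear style: a stack-machine instruction list.
;; Binaryen cannot read this form:
;;   local.get $x
;;   local.get $y
;;   i32.add

;; Pure s-expression form, which Binaryen can read:
(func $add (param $x i32) (param $y i32) (result i32)
  (i32.add
    (local.get $x)
    (local.get $y)))
```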

  • When optimizing Binaryen uses an additional IR, Stack IR (see src/wasm-stack.h). Stack IR allows a bunch of optimizations that are tailored for the stack machine form of WebAssembly's binary format (but Stack IR is less efficient for general optimizations than the main Binaryen IR). If you have a wasm file that has been particularly well-optimized, a simple round-trip conversion (just read and write, without optimization) may cause more noticeable differences, as Binaryen fits it into Binaryen IR's more structured format. If you also optimize during the round-trip conversion then Stack IR opts will be run and the final wasm will be better optimized.

Notes when working with Binaryen IR:

  • As mentioned above, Binaryen IR has a tree structure. As a result, each expression should have exactly one parent - you should not "reuse" a node by having it appear more than once in the tree. The motivation for this limitation is that when we optimize we modify nodes, so if they appear more than once in the tree, a change in one place can appear in another incorrectly.
  • For similar reasons, nodes should not appear in more than one function.


Intrinsics

Binaryen intrinsic functions look like calls to imports, e.g.,

(import "binaryen-intrinsics" "foo" (func $foo))

Implementing them that way allows them to be read and written by other tools, and it avoids confusing errors on a binary format error that could happen in those tools if we had a custom binary format extension.

An intrinsic method may be optimized away by the optimizer. If it is not, it must be lowered before shipping the wasm, as otherwise it will look like a call to an import that does not exist (and VMs will show an error on not having a proper value for that import). That final lowering is not done automatically. A user of intrinsics must run the pass for that explicitly, because the tools do not know when the user intends to finish optimizing, as the user may have a pipeline of multiple optimization steps, or may be doing local experimentation, or fuzzing/reducing, etc. Only the user knows when the final optimization happens before the wasm is "final" and ready to be shipped. Note that, in general, some additional optimizations may be possible after the final lowering, and so a useful pattern is to optimize once normally with intrinsics, then lower them away, then optimize after that, e.g.:

wasm-opt input.wasm -o output.wasm  -O --intrinsic-lowering -O

Each intrinsic defines its semantics, which includes what the optimizer is allowed to do with it and what the final lowering will turn it to. See intrinsics.h for the detailed definitions. A quick summary appears here:

  • call.without.effects: Similar to a call_ref in that it receives parameters, and a reference to a function to call, and calls that function with those parameters, except that the optimizer can assume the call has no side effects, and may be able to optimize it out (if it does not have a result that is used, generally).
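As a sketch (the names and exact signature here are illustrative, not copied from intrinsics.h), using the intrinsic looks like an ordinary import and call:

```wat
;; Illustrative only: the intrinsic is imported like any other function.
;; It takes the call's parameters followed by a reference to the target.
(import "binaryen-intrinsics" "call.without.effects"
  (func $cwe (param i32 funcref) (result i32)))

;; The optimizer may treat this as side-effect-free and remove it if the
;; result is unused; the lowering pass turns it into a direct call of $f.
(call $cwe
  (i32.const 41)
  (ref.func $f))
```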


Tools

This repository contains code that builds the following tools in bin/:

  • wasm-opt: Loads WebAssembly and runs Binaryen IR passes on it.
  • wasm-as: Assembles WebAssembly in text format (currently S-Expression format) into binary format (going through Binaryen IR).
  • wasm-dis: Un-assembles WebAssembly in binary format into text format (going through Binaryen IR).
  • wasm2js: A WebAssembly-to-JS compiler. This is used by Emscripten to generate JavaScript as an alternative to WebAssembly.
  • wasm-reduce: A testcase reducer for WebAssembly files. Given a wasm file that is interesting for some reason (say, it crashes a specific VM), wasm-reduce can find a smaller wasm file that has the same property, which is often easier to debug. See the docs for more details.
  • wasm-shell: A shell that can load and interpret WebAssembly code. It can also run the spec test suite.
  • wasm-emscripten-finalize: Takes a wasm binary produced by llvm+lld and performs emscripten-specific passes over it.
  • wasm-ctor-eval: A tool that can execute C++ global constructors ahead of time. Used by Emscripten.
  • binaryen.js: A standalone JavaScript library that exposes Binaryen methods for creating and optimizing Wasm modules. For builds, see binaryen.js on npm (or download it directly from github, rawgit, or unpkg).

Usage instructions for each are below.

Binaryen Optimizations

Binaryen contains a lot of optimization passes to make WebAssembly smaller and faster. You can run the Binaryen optimizer using wasm-opt, but it can also be run while using other tools, like wasm2js and wasm-metadce.

  • The default optimization pipeline is set up by functions like addDefaultFunctionOptimizationPasses.
  • There are various pass options that you can set, to adjust the optimization and shrink levels, whether to ignore unlikely traps, inlining heuristics, fast-math, and so forth. See wasm-opt --help for how to set them and other details.

See each optimization pass for details of what it does, but here is a quick overview of some of the relevant ones:

  • CoalesceLocals - Key “register allocation” pass. Does a live range analysis and then reuses locals in order to minimize their number, as well as to remove copies between them.
  • CodeFolding - Avoids duplicate code by merging it (e.g. if two if arms have some shared instructions at their end).
  • CodePushing - “Pushes” code forward past branch operations, potentially allowing the code to not be run if the branch is taken.
  • DeadArgumentElimination - LTO pass to remove arguments to a function if it is always called with the same constants.
  • DeadCodeElimination
  • Directize - Turn an indirect call into a normal call, when the table index is constant.
  • DuplicateFunctionElimination - LTO pass.
  • Inlining - LTO pass.
  • LocalCSE - Simple local common subexpression elimination.
  • LoopInvariantCodeMotion
  • MemoryPacking - Key "optimize data segments" pass that combines segments, removes unneeded parts, etc.
  • MergeBlocks - Merge a block to an outer one where possible, reducing their number.
  • MergeLocals - When two locals have the same value in part of their overlap, pick in a way to help CoalesceLocals do better later (split off from CoalesceLocals to keep the latter simple).
  • MinifyImportsAndExports - Minifies them to “a”, “b”, etc.
  • OptimizeAddedConstants - Optimize a load/store with an added constant into a constant offset.
  • OptimizeInstructions - Key peephole optimization pass with a constantly increasing list of patterns.
  • PickLoadSigns - Adjust whether a load is signed or unsigned in order to avoid sign/unsign operations later.
  • Precompute - Calculates constant expressions at compile time, using the built-in interpreter (which is guaranteed to be able to handle any constant expression).
  • ReReloop - Transforms wasm structured control flow to a CFG and then goes back to structured form using the Relooper algorithm, which may find more optimal shapes.
  • RedundantSetElimination - Removes a local.set of a value that is already present in a local. (Overlaps with CoalesceLocals; this achieves the specific operation just mentioned without all the other work CoalesceLocals does, and therefore is useful in other places in the optimization pipeline.)
  • RemoveUnusedBrs - Key “minor control flow optimizations” pass, including jump threading and various transforms that can get rid of a br or br_table (like turning a block with a br in the middle into an if when possible).
  • RemoveUnusedModuleElements - “Global DCE”, an LTO pass that removes imports, functions, globals, etc., when they are not used.
  • ReorderFunctions - Put more-called functions first, potentially allowing the LEB emitted to call them to be smaller (in a very large program).
  • ReorderLocals - Put more-used locals first, potentially allowing the LEB emitted to use them to be smaller (in a very large function). After the sorting, it also removes locals not used at all.
  • SimplifyGlobals - Optimizes globals in various ways, for example, coalescing them, removing mutability from a global never modified, applying a constant value from an immutable global, etc.
  • SimplifyLocals - Key “local.get/set/tee” optimization pass, doing things like replacing a set and a get with moving the set’s value to the get (and creating a tee) where possible. Also creates block/if/loop return values instead of using a local to pass the value.
  • Vacuum - Key “remove silly unneeded code” pass, doing things like removing an if arm that has no contents, a drop of a constant value with no side effects, a block with a single child, etc.

“LTO” in the above means an optimization is Link Time Optimization-like in that it works across multiple functions, but in a sense Binaryen is always “LTO” as it is usually run on the final linked wasm.

Advanced optimization techniques in the Binaryen optimizer include SSAification, Flat IR, and Stack/Poppy IR.

Binaryen also contains various passes that do other things than optimizations, like legalization for JavaScript, Asyncify, etc.


Building

cmake . && make

A C++14 compiler is required. Note that you can also use ninja as your generator: cmake -G Ninja . && ninja.

Binaryen.js can be built using Emscripten, which can be installed via the SDK.

emcmake cmake . && emmake make binaryen_js

Visual C++

  1. Using the Microsoft Visual Studio Installer, install the "Visual C++ tools for CMake" component.

  2. Generate the projects:

    mkdir build
    cd build
    "%VISUAL_STUDIO_ROOT%\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin\cmake.exe" ..

    Substitute VISUAL_STUDIO_ROOT with the path to your Visual Studio installation. In case you are using the Visual Studio Build Tools, the path will be "C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools".

  3. From the Developer Command Prompt, build the desired projects:

    msbuild binaryen.vcxproj

    CMake generates a project named "ALL_BUILD.vcxproj" for conveniently building all the projects.




wasm-opt

bin/wasm-opt [.wasm or .wat file] [options] [passes, see --help] [--help]

The wasm optimizer receives WebAssembly as input, and can run transformation passes on it, as well as print it (before and/or after the transformations). For example, try

bin/wasm-opt test/passes/lower-if-else.wat --print

That will pretty-print out one of the test cases in the test suite. To run a transformation pass on it, try

bin/wasm-opt test/passes/lower-if-else.wat --print --lower-if-else

The lower-if-else pass lowers if-else into a block and a break. You can see the change the transformation causes by comparing the output of the two print commands.

It's easy to add your own transformation passes to the shell, just add .cpp files into src/passes, and rebuild the shell. For example code, take a look at the lower-if-else pass.

Some more notes:

  • See bin/wasm-opt --help for the full list of options and passes.
  • Passing --debug will emit some debugging info.



wasm2js

bin/wasm2js [input.wasm file]

This will print out JavaScript to the console.

For example, try

$ bin/wasm2js test/hello_world.wat

That output contains

 function add(x, y) {
  x = x | 0;
  y = y | 0;
  return x + y | 0 | 0;
 }

as a translation of

 (func $add (; 0 ;) (type $0) (param $x i32) (param $y i32) (result i32)
  (i32.add
   (local.get $x)
   (local.get $y)
  )
 )

wasm2js's output is in ES6 module format - basically, it converts a wasm module into an ES6 module (to run on older browsers and Node.js versions you can use Babel etc. to convert it to ES5). Let's look at a full example of calling that hello world wat; first, create the main JS file:

// main.mjs
import { add } from "./hello_world.mjs";
console.log('the sum of 1 and 2 is:', add(1, 2));

Then run this (note that you need a new enough Node.js with ES6 module support):

$ bin/wasm2js test/hello_world.wat -o hello_world.mjs
$ node --experimental-modules main.mjs
the sum of 1 and 2 is: 3

Things to keep in mind with wasm2js's output:

  • You should run wasm2js with optimizations for release builds, using -O or another optimization level. That will optimize along the entire pipeline (wasm and JS). It won't do everything a JS minifier would, though, like minify whitespace, so you should still run a normal JS minifier afterwards.
  • It is not possible to match WebAssembly semantics 100% precisely with fast JavaScript code. For example, every load and store may trap, and to make JavaScript do the same we'd need to add checks everywhere, which would be large and slow. Instead, wasm2js assumes loads and stores do not trap, that int/float conversions do not trap, and so forth. There may also be slight differences in corner cases of conversions, like non-trapping float to int.



Testing

Run the main test script (directly, or via python); it will run wasm-shell, wasm-opt, etc. on the testcases in test/, and verify their outputs.

The script supports some options:

./ [--interpreter=/path/to/interpreter] [TEST1] [TEST2]..
  • If an interpreter is provided, we run the output through it, checking for parse errors.
  • If tests are provided, we run exactly those. If none are provided, we run them all. To see what tests are available, run ./ --list-suites.
  • Some tests require emcc or nodejs in the path. They will not run if the tool cannot be found, and you'll see a warning.
  • We have tests from upstream in tests/spec, in git submodules. Running ./ should update those.

Setting up dependencies

./third_party/ [mozjs|v8|wabt|all]

This script (which can also be run via python) installs required dependencies like the SpiderMonkey JS shell, the V8 JS shell, and WABT in third_party/. Other scripts automatically pick these up when installed.

Run pip3 install -r requirements-dev.txt to get the requirements for the lit tests. Note that you need to have the location pip installs to in your $PATH (on linux, ~/.local/bin).


Fuzzing

./scripts/ [--binaryen-bin=build/bin]

This script (which can also be run via python) will run various fuzzing modes on random inputs with random passes until it finds a possible bug. See the wiki page for all the details.

Design Principles

  • Interned strings for names: It's very convenient to have names on nodes, instead of just numeric indices etc. To avoid most of the performance difference between strings and numeric indices, all strings are interned, which means there is a single copy of each string in memory, string comparisons are just a pointer comparison, etc.
  • Allocate in arenas: Based on experience with other optimizing/transforming toolchains, it's not worth the overhead to carefully track memory of individual nodes. Instead, we allocate all elements of a module in an arena, and the entire arena can be freed when the module is no longer needed.


FAQ

  • Why the weird name for the project?

"Binaryen" is a combination of binary - since WebAssembly is a binary format for the web - and Emscripten - with which it can integrate in order to compile C and C++ all the way to WebAssembly, via asm.js. Binaryen began as Emscripten's WebAssembly processing library (wasm-emscripten).

"Binaryen" is pronounced in the same manner as "Targaryen": bi-NAIR-ee-in. Or something like that? Anyhow, however Targaryen is correctly pronounced, they should rhyme. Aside from pronunciation, the Targaryen house words, "Fire and Blood", have also inspired Binaryen's: "Code and Bugs."

  • Does it compile under Windows and/or Visual Studio?

Yes, it does. Here's a step-by-step tutorial on how to compile it under Windows 10 x64 with CMake and Visual Studio 2015. However, Visual Studio 2017 may now be required. Help would be appreciated on Windows and OS X as most of the core devs are on Linux.

  • Add appveyor.yml for Windows CI

    Add appveyor.yml for Windows CI

    • Added jobs for MinGW (64 bit) and MSVC (32 and 64 bits) with Release configurations as well as MSVC (64 bit) with Debug configuration.
    • Added badge to README.

    Commit 005929e build status:

    @kripken, to set it up, please login to AppVeyor with your GitHub account and chose WebAssembly org from the drop down (post login). Then add binaryen project from top menu and press build.

    I have also added ctest command (doing nothing atm) which would function if we configure the tests in CMakeLists with ctest runner (aka cross-platform tests,: Examples can be found here:✓

    opened by ghost 42
  • "-s WASM=1 -s USE_PTHREADS=1 "raises questions about Shared content.

    When I compile a xx.c file using -s WASM= 1-s USE_PTHREADS=1. eg. emcc test.c -s WASM=1 -s USE_PTHREADS=1 -o test.html The following files will be generated: pthread-main.js test.html test.html.mem test.js test.wasm Now,run the test.html on Firefox and Google, Firefox will error: TypeError: invalid array type for the operation at initMainThreadBlock Google will error: Uncaught TypeError: [object Uint32Array] is not an integer shared typed array. at () at Object.initMainThreadBlock (test.js:1824) at test.js:5557 But about firefox and Google.I opened the relevant experimental configuration. chrome://flags/ ,I've already set up (1)shared-array-buffer (2)All about webassembly chmore

    I looked at test.js, and the error was here. error

    The mistake made me very embarrassed because it could not go on. Can anyone answer that for me?

    opened by Lachrista 41
  • Add .clang-format

    Add .clang-format

    I picked Chromium style as a base style based on @dschuff's suggestion, but I don't care much about particulars as long as it is consistent. We can also use LLVM or Mozilla as a base style. Let me know your preferences. And I modified ColumnLimit to 160 because a lot of code in this repo have columns longer than the usual limit, 80. (I actually kinda prefer 80 personally though :sweat_smile: ) Any suggestions are welcome.

    opened by aheejin 41
  • Float refactoring and nan comparison change

    Float refactoring and nan comparison change

    Followup to #151.

    This began as just using bit_cast in the Literal.reinterpret methods. But this made some tests break. Investigating, I ended up with the following patch, which also refactors out float and double payload methods.

    This uses only the payload+signbit to compare floats and doubles when they are nans. It fixes the test failures. But is this correct? What are the proper semantics for comparing floats for equality when they are nans?

    Taking into account only the payload seems reasonable given that we now only print the payload for nans. If that's not true, how do those two make sense?

    cc @jfbastien @sunfishcode

    opened by kripken 37
  • Update reference types

    Update reference types

    To align with the current state of the reference types proposal:

    • Removes nullref
    • Removes externref and funcref subtyping
    • A Literal of a nullable reference type can now represent null (previously was type nullref)
    • Updates the tests and comments out those tests relying on subtyping, to be enabled again in follow-ups
    opened by dcodeIO 31
  • binaryen has started to require CMAKE_OSX_DEPLOYMENT_TARGET of 10.14

    binaryen has started to require CMAKE_OSX_DEPLOYMENT_TARGET of 10.14

    When we started using std::variant we inadvertently started requiring a 10.14 macOS deployment target.

    This went unnoticed on the emscripten-releases builder because there we use out own version of libc++.

    However for other users, such as those who want to build from source using emsdk, this is a new requirement and emsdk currently pins CMAKE_OSX_DEPLOYMENT_TARGET to 10.11:

    I guess we either need to remove the requirements of binaryen or relax the requirements of emsdk.

    opened by sbc100 30
  • Misoptimization causing a runtime panic in Rust code

    Misoptimization causing a runtime panic in Rust code

    In the archive - - you will find two files, coreutils.nodbg.wasm and coreutils.nodbg.opt.wasm.

    Unfortunately, they're fairly large, but that's as far as I could get for now. Maybe you'll be able to reduce them further and get to the bottom of the issue with wasm-reduce, but I couldn't spend more time on it.

    Initially I've built the original source code in a release mode, which, combined with wasm-opt --asyncify -Os, introduced an infinite loop in the resulting WebAssembly, and I couldn't figure out what's going on.

    After rebuilding in debug mode, I'm getting a [slightly more helpful] runtime assertion panic somewhere in initialisation of the app (when it tries to allocate data for a HashMap), and that panic is not reproducible by the original app, suggesting that some values were indeed misoptimized.

    My initial assumption was that this is some bug specific to Asyncify, but, by slowly eliminating different options, got down to just -O1.

    coreutils.nodbg.opt.wasm is produced with

    $ wasm-opt.exe coreutils.nodbg.wasm -O1 -g -o coreutils.nodbg.opt.wasm

    using wasm-opt from the latest commit on master (, however I could reproduce the issue with older versions as well - should be easy to do so using the command above.

    To reproduce the issue, install a WASI-compatible runner, e.g. Wasmtime or Wasmer, and invoke it on both Wasm files.

    This is what I'm seeing on the unoptimised file:

    $ wasmtime -g coreutils.nodbg.wasm
    coreutils.nodbg 0.0.1 (multi-call binary)
    Usage: coreutils.nodbg [function [arguments...]]
    Currently defined functions/utilities:
        base32, base64, basename, cat, cksum, comm, cp, cut, date, df,
        dircolors, dirname, echo, env, expand, expr, factor, false, fmt, fold,
        hashsum, head, join, link, ln, ls, md5sum, mkdir, mktemp, more, mv, nl,
        od, paste, printenv, printf, ptx, pwd, readlink, realpath, relpath, rm,
        rmdir, seq, sha1sum, sha224sum, sha256sum, sha3-224sum, sha3-256sum,
        sha3-384sum, sha3-512sum, sha384sum, sha3sum, sha512sum, shake128sum,
        shake256sum, shred, shuf, sleep, sort, split, sum, tac, tail, tee, test,
        tr, true, truncate, tsort, unexpand, uniq, wc, yes

    and the optimised one:

    $ wasmtime -g coreutils.nodbg.opt.wasm
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: LayoutErr { private: () }', C:\Users\rreverser\.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib/rustlib/src/rust\src/libcore/alloc/
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    Error: failed to run main module `coreutils.nodbg.opt.wasm`
    Caused by:
        0: failed to invoke command default
        1: wasm trap: unreachable
           wasm backtrace:
             0: 0x55cdd9 - <unknown>!__rust_start_panic
             1: 0x55bf3e - <unknown>!rust_panic
             2: 0x55bc75 - <unknown>!std::panicking::rust_panic_with_hook::hb8132b4308a71007
             3: 0x55baca - <unknown>!rust_begin_unwind
             4: 0x568654 - <unknown>!core::panicking::panic_fmt::hdd2ab611a748a491
             5: 0x56d5ae - <unknown>!core::option::expect_none_failed::habae0dd01495a6a7
             6: 0x339708 - <unknown>!core::result::Result<T,E>::unwrap::h0b29ccc7c9c10d07
             7: 0x89b8 - <unknown>!core::alloc::layout::Layout::pad_to_align::hae1edfd09d682fcc
             8: 0x5db74 - <unknown>!core::alloc::layout::Layout::array::h630440cfaca600e0
             9: 0x57b73 - <unknown>!hashbrown::raw::calculate_layout::h32a6c9dc2acba081
             10: 0x594fe - <unknown>!hashbrown::raw::RawTable<T>::new_uninitialized::h92067dc36e22c506
             11: 0x59b10 - <unknown>!hashbrown::raw::RawTable<T>::try_with_capacity::h64ecb1c3ee6c48f1
             12: 0x5864c - <unknown>!hashbrown::raw::RawTable<T>::resize::h1c5680148fe8f2d3
             13: 0x58216 - <unknown>!hashbrown::raw::RawTable<T>::reserve_rehash::h36d80382ea8e9ffc
             14: 0x5a96b - <unknown>!hashbrown::raw::RawTable<T>::reserve::h96e5457f885fde23
             15: 0x5a872 - <unknown>!hashbrown::raw::RawTable<T>::insert::hd555411ed0fee9f1
             16: 0x77f8e - <unknown>!hashbrown::map::HashMap<K,V,S>::insert::h263136bf7b5163b1
             17: 0x44e23 - <unknown>!std::collections::hash::map::HashMap<K,V,S>::insert::h659bd993b5cd211f
             18: 0xe449 - <unknown>!coreutils::util_map::h81ccd6dc74a16eac
             19: 0x7d64 - <unknown>!coreutils::main::h791587a0c1680f0e
             20: 0x4cd5c - <unknown>!std::rt::lang_start::{{closure}}::h14ee8eacd196f15d
             21: 0x55aa3b - <unknown>!std::sys_common::backtrace::__rust_begin_short_backtrace::hd2bb8386068f691a
             22: 0x55c02a - <unknown>!std::rt::lang_start_internal::h66de5b0ec01e6d33
             23: 0x4cd33 - <unknown>!std::rt::lang_start::h01445dbe67544740
             24: 0x849f - <unknown>!__original_main
             25: 0x7cad - <unknown>!_start
    opened by RReverser 29
  • Vanilla LLVM testing

    Vanilla LLVM testing

    At this point I think I have all we need in emscripten to support using a vanilla (stock) LLVM build. Now I'm moving on to testing in binaryen, which is what this pull is.

    opened by kripken 29
  • [reference-types] remove single table restriction in IR

    [reference-types] remove single table restriction in IR

    This will remove the single table per module assumption everywhere, and instead introduces a vector of tables, same as functions, globals, etc. Support for parsing and writing multiple tables is added and C/JS APIs have been updated accordingly. A table name argument is also added to call_indirect.

    (Fixes #3512)

    opened by martianboy 28
  • Autogenerate boilerplate code for expressions from a single tool

    This proposes a new way of developing Binaryen: Instead of hand-writing the boilerplate for expressions - like the logic for walking, comparing, etc. - we have a single declaration of each expression, and a script that generates the code.

    The goal is to reduce the hand-written boilerplate code, mainly to make it easier to work on Binaryen - reduce the time to add new instructions, or change them - and to reduce the risk for bugs in those places. A minor goal is to also improve the speed of the code.

    This PR implements the declaration and emitting in Python. This can't be done in C++ itself - in theory macros or templates can get close, but we want a lot more power here than macros provide even in other languages (AFAIK - but there is lisp...). Fundamentally we want the power to process the declarations of expressions and to generate something pretty arbitrary from that, using a full programming language. A possible future use of this, for example, is to automatically reorder our instructions in some optimal way, so that it's a single comparison to check if something is control flow structure or not; or to reorder (reorderable) fields on an expression; and more complex things are possible.

    Example of what this PR does: we declare Call once,

    class Call(Expression):
        operands = ChildList()
        target = Name()
        isReturn = Bool(init='false')

    This reads as: Call is an expression. It has a field "operands" which is a list of child expressions. It has a field "target" which is a name (the called function). And it has a field "isReturn" which is a bool that has an initial value.

    The declarations are all ported from the existing header, with initial values and comments and so forth all copied over, so this should be NFC (hopefully!)

    We then run the tool and it autogenerates the C++ header:

    class Call : public SpecificExpression<Expression::CallId> {
    public:
      Call(MixedArena& allocator) : operands(allocator) {}
      ExpressionList operands;
      Name target;
      bool isReturn = false;
      void finalize();
    };

    And also it autogenerates C++ code to compare calls:

    case Expression::CallId: {
      auto* castLeft = left->cast<Call>();
      auto* castRight = right->cast<Call>();
      if (castLeft->operands.size() != castRight->operands.size()) {
        return false;
      }
      // Children are pushed onto work stacks and compared iteratively.
      for (auto* child : castLeft->operands) {
        leftStack.push_back(child);
      }
      for (auto* child : castRight->operands) {
        rightStack.push_back(child);
      }
      if (castLeft->target != castRight->target) {
        return false;
      }
      if (castLeft->isReturn != castRight->isReturn) {
        return false;
      }
      break;
    }

    That code shows how this can prevent bugs: the tool won't forget a field or anything like that.
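    A toy version of such a declaration-to-C++ generator can be sketched in a few lines of Python. This is only an illustration of the approach, not Binaryen's actual tool; the names (Field, ChildList, generate_class) are made up, and the output is simplified (no constructor or finalize):

```python
# Minimal sketch of a declaration-to-C++ generator in the spirit of the
# tool described above. All names here are illustrative, not Binaryen's.

class Field:
    cpp_type = None
    def __init__(self, init=None):
        self.init = init
    def emit(self, name):
        # Emit one C++ member declaration, with an optional initializer.
        decl = f'{self.cpp_type} {name}'
        if self.init is not None:
            decl += f' = {self.init}'
        return decl + ';'

class ChildList(Field):
    cpp_type = 'ExpressionList'

class Name(Field):
    cpp_type = 'Name'

class Bool(Field):
    cpp_type = 'bool'

def generate_class(class_name, fields):
    # Turn a declaration (a dict of field name -> Field) into a C++ class.
    lines = [
        f'class {class_name} : public SpecificExpression<Expression::{class_name}Id> {{',
        'public:',
    ]
    for field_name, field in fields.items():
        lines.append('  ' + field.emit(field_name))
    lines.append('};')
    return '\n'.join(lines)

fields = {'operands': ChildList(), 'target': Name(), 'isReturn': Bool(init='false')}
print(generate_class('Call', fields))
```

    The key point is that each expression is declared exactly once, and every kind of boilerplate (here, just the class declaration) is derived from that single source of truth.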

    This PR is just a proof of concept for discussion. It emits only the two things shown above: the main header declaration of the class, and the logic to compare expressions. If we like this, the goal will be to automate all the other boilerplate as well - walking, hashing, even the C and JS APIs.

    Reading the Python, it's not perfect: f-strings would be nice (f'{foo}' will replace {foo} with the local var foo), but you need to escape curly braces, and we are emitting C++ here... so I avoid f-strings in some places. In theory JavaScript could be better here, as the formatting there doesn't have the issue. I don't have a strong feeling myself between the two. Benefits of python include that it's already used in Binaryen, and we can import and reuse a little code from scripts/shared, but it's not major.
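    The brace-escaping issue is easy to demonstrate with a generic snippet (not taken from the tool itself): every literal brace in the emitted C++ must be doubled inside an f-string, which gets noisy when most of the output consists of C++ blocks.

```python
# Emitting C++ from a Python f-string: literal braces must be doubled.
name = 'Call'
emitted = f'class {name} {{\n  void finalize();\n}};'
print(emitted)
# prints:
# class Call {
#   void finalize();
# };
```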

    This PR passes the test suite as well as fuzzing and emscripten tests, so it looks stable. As mentioned earlier speed is not a major goal here, but still, the comparison code is faster as it avoids the stack of immediates we had before (perf stat reports 9% fewer instructions are run, but the wall time is not much changed, likely because my test was memory bandwidth bound).

    This integrates with the existing CMake build system, running the tool automatically at the start. It's pretty fast, and it only writes out the output if there are changes, so it doesn't impact build times that I can tell.

    Process-wise, the generated C++ is all checked in. It's also pretty readable (the tool runs clang-format on it for you). So it should be ok in terms of debugging, for example - no weird stack trace issues like with macros. You generally don't need to care about how those files were generated, unless you are adding a new instruction (and then hopefully it saves you a lot of time!).

    Very interested to hear feedback here! Overall, I think I've convinced myself at least that this is a good idea. While the script code for a single thing is less readable than the emitted C++ code for it, you do write that script code once - and not once per instruction. So it's one slightly worse task rather than many slightly better ones.

    One thing I am not happy with in the current code is how Method()s are declared. Open to suggestions there.

    opened by kripken 27
  • More optimizations for pow of two and pos/neg one const on the right

    • Introduce optimizePowerOf2UDiv, which completes the optimizePowerOf2URem / optimizePowerOf2Mul set.
    • optimizePowerOf2UDiv and friends are now defined as template functions so that they handle both i32 and i64 types.
    • More simplifications for binary ops with a positive- or negative-one constant on the rhs:
      • (integer)x * -1 ==> 0 - x
      • (signed)x % -1 ==> 0
      • (signed)x % 1 ==> 0
      • (uint32_t)x / -1 ==> x == -1
      • (unsigned)x > -1 ==> 0
      • (unsigned)x < -1 ==> x != -1
      • (unsigned)x <= -1 ==> 1
      • ~~(signed)x <= -1 ==> (unsigned)x >> sizeof(bits) - 1~~
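    These identities can be spot-checked with 32-bit wraparound arithmetic. The following verification sketch is illustrative only, not code from the PR (note that unsigned division by all-ones yields 1 exactly when x is itself all-ones, i.e. the result is x == -1):

```python
# Spot-check of the i32 identities above using 32-bit wraparound arithmetic.
M = 0xFFFFFFFF  # all-ones: the unsigned encoding of -1

def u32(x):
    return x & M

for x in (0, 1, 2, 5, 1234567, M - 1, M):
    assert u32(x * -1) == u32(0 - x)          # (integer)x * -1   ==>  0 - x
    assert x // M == (1 if x == M else 0)     # (uint32_t)x / -1  ==>  x == -1
    assert (x > M) is False                   # (unsigned)x > -1  ==>  0
    assert (x < M) == (x != M)                # (unsigned)x < -1  ==>  x != -1
    assert (x <= M) is True                   # (unsigned)x <= -1 ==>  1

# Remainder by 1 or -1 is always 0 (this holds for both C's truncated
# semantics and Python's floored semantics, since the remainder is exact).
for x in (-7, -1, 0, 3, 42):
    assert x % 1 == 0 and x % -1 == 0
```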
    opened by MaxGraey 26
  • wasm2js: Avoid emitting non-JS code during opt

    As noted in #4806, trying to optimize past level 0 can result in passes emitting non-JS code, which then cannot be converted during final output.

    This commit creates a new targetJs option in PassOptions, which can be checked inside each pass where non-JS code might be emitted.

    This commit initially adds that logic to OptimizeInstructions, where this issue was first noticed.

    opened by willcohen 0
  • Moving assert in MultiMemoryLowering

    Move the assert that checks whether the DataSegment offset is const so that it only applies when the DataSegment belongs to a memory other than the first; the offset does not need to be adjusted for the first memory.

    opened by ashleynh 0
  • [Parser] Parse data segments

    Parse active and passive data segments, including all their variations and abbreviations as well as data segments declared inline in memory declarations. Switch to parsing data strings, memory limits, and memory types during the ParseDecls phase so that the inline data segments can be completely parsed during that phase and never revisited. Parsing the inline data segments in a later phase would not work because they would be incorrectly inserted at the end of the data segment index space.

    Also update the printer to print a memory use on active data segments that are initialized in a non-default memory.

    opened by tlively 1
  • [NFC][Parser] Track definition indices

    For each definition in a module, record that definition's index in the relevant index space. Previously the index was inferred from its position in a list of module definitions, but that scheme does not scale to data segments defined inline inside memory definitions because these data segments occupy a slot in the data segment index space but do not have their own independent definitions.
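    The difference between the two schemes can be sketched abstractly (hypothetical names, not the parser's actual code): an inline data segment never appears in the list of standalone definitions, so positional inference skips it and every later segment gets the wrong index, while explicit recording stays correct.

```python
# Sketch: positional index inference vs. explicitly recorded indices.
# Names here are illustrative, not the Binaryen parser's actual types.

# Positional scheme: a segment's index is its position in the list of
# standalone definitions. An inline segment (defined inside a memory)
# never appears in that list, so it is skipped.
standalone_defs = ['seg_a', 'seg_c']
positional = {name: i for i, name in enumerate(standalone_defs)}

# Explicit scheme: every definition records its index in the data segment
# index space as it is encountered, inline or not.
explicit = {}
next_index = 0
for name, inline in [('seg_a', False), ('seg_b', True), ('seg_c', False)]:
    explicit[name] = next_index
    next_index += 1

# The inline segment occupies slot 1, shifting seg_c to 2; positional
# inference would wrongly give seg_c index 1.
print(explicit)    # {'seg_a': 0, 'seg_b': 1, 'seg_c': 2}
print(positional)  # {'seg_a': 0, 'seg_c': 1}
```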

    opened by tlively 1
  • Why are multiple identical types emitted into the type section after invoking `TypeBuilderSetStructType` several times?

    I need to use a custom struct type, so I used the TypeBuilderCreate, TypeBuilderSetStructType, and TypeBuilderBuildAndDispose functions to wrap a createStructType function, just like the example below:

    For multiple objects that have the same struct structure, I may call the createStructType function several times, which creates the same type each time. However, these identical types all end up in the type section of the generated wat file:

     (type ${mut:ref?|{}|_mut:f64} (struct (field (mut (ref null ${}))) (field (mut f64))))
     (type ${mut:ref?|{}|_mut:f64} (struct (field (mut (ref null ${}))) (field (mut f64))))
     (type $none_=>_none (func))
     (type ${mut:ref?|{}|_mut:f64} (struct (field (mut (ref null ${}))) (field (mut f64))))
     (func $objTest

    When I use wasm-as to convert the wat file to a wasm file, an error is reported: `[parse exception: duplicate function type (at 3:1)] Fatal: error in parsing input`.

    Is there any way to avoid generating the same type in the type section when multiple calls to createStructType are required?

    opened by yviansu 6
Development of WebAssembly and associated infrastructure
Wasmcraft a compiler from WebAssembly to Minecraft Java Edition datapacks

Wasmcraft is a compiler from WebAssembly to Minecraft Java Edition datapacks. Since WebAssembly is a well-supported target for many languages, this means that you can run code written in e.g. C in Minecraft.

null 64 Dec 23, 2022
⚙️ Experimental JVM bytecode to WebAssembly compiler

⚙️ montera Final year university project: a highly experimental JVM bytecode to WebAssembly compiler ⚠️ Do NOT use this for serious projects yet! It's

MrBBot 7 Oct 24, 2022
Zaplib is an open-source library for speeding up web applications using Rust and WebAssembly.

⚡ Zaplib Zaplib is an open-source library for speeding up web applications using Rust and WebAssembly. It lets you write high-performance code in Rust

Zaplib 1.2k Jan 5, 2023
A simple Rust and WebAssembly real-time implementation of the Vigenère Cipher utilizing the Sycamore reactive library.

WebAssembly Vigenère Cipher A simple Rust and WebAssembly real-time implementation of the Vigenère Cipher utilizing the Sycamore reactive library, Tru

Rodrigo Santiago 6 Oct 11, 2022
Client for integrating private analytics in fast and reliable libraries and apps using Rust and WebAssembly

TelemetryDeck Client Client for integrating private analytics in fast and reliable libraries and apps using Rust and WebAssembly The library provides

Konstantin 2 Apr 20, 2022
Webassembly binding for Hora Approximate Nearest Neighbor Search Library

hora-wasm [Homepage] [Document] [Examples] [Hora] JavaScript binding for the Hora Approximate Nearest Neighbor Search, in a WebAssembly way. Features Pe

Hora-Search 26 Sep 23, 2022
A simple event-driven library for parsing WebAssembly binary files

The WebAssembly binary file decoder in Rust A Bytecode Alliance project The decoder library provides lightweight and fast decoding/parsing of WebAssem

Yury Delendik 8 Jul 27, 2022
A console and web-based Gomoku written in Rust and WebAssembly

rust-gomoku A console and web-based Gomoku written in Rust and WebAssembly Getting started with cargo & npm Install required program, run # install

namkyu1999 2 Jan 4, 2022
darkforest is a console and web-based Roguelike written in Rust and WebAssembly.

darkforest darkforest is a console and web-based Roguelike written in Rust and WebAssembly. Key Features TBA Quick Start TBA How To Contribute Contrib

Chris Ohk 5 Oct 5, 2021
Lumen - A new compiler and runtime for BEAM languages

An alternative BEAM implementation, designed for WebAssembly

Lumen 3.1k Dec 26, 2022
🚀Wasmer is a fast and secure WebAssembly runtime that enables super lightweight containers to run anywhere

Wasmer is a fast and secure WebAssembly runtime that enables super lightweight containers to run anywhere: from Desktop to the Cloud, Edge and IoT devices.

Wasmer 14.1k Jan 8, 2023
Simple file sharing with client-side encryption, powered by Rust and WebAssembly

Hako Simple file sharing with client-side encryption, powered by Rust and WebAssembly Not feature-packed, but basic functionalities are just working.

Jaehyeon Park 30 Nov 25, 2022
A handy calculator, based on Rust and WebAssembly.

qubit Visit Website To Use Calculator. Example: 2 + 2

Abhimanyu Sharma 55 Dec 26, 2022
Rust WebGL2 wrapper with a focus on making high-performance WebAssembly graphics code easier to write and maintain

Limelight Limelight is a WebGL2 wrapper with a focus on making high-performance WebAssembly graphics code easier to write and maintain. live

drifting in space 27 Dec 30, 2022
A template for kick starting a Rust and WebAssembly project using wasm-pack.

A template for kick starting a Rust and WebAssembly project using wasm-pack.

Haoxi Tan 1 Feb 14, 2022
Rust-based WebAssembly bindings to read and write Apache Parquet files

parquet-wasm WebAssembly bindings to read and write the Parquet format to Apache Arrow. This is designed to be used alongside a JavaScript Arrow imple

Kyle Barron 103 Dec 25, 2022
A prototype WebAssembly linker using module linking.

WebAssembly Module Linker Please note: this is an experimental project. wasmlink is a prototype WebAssembly module linker that can link together a mod

Peter Huene 19 Oct 28, 2022
Sealed boxes implementation for Rust/WebAssembly.

Sealed boxes for Rust/WebAssembly This Rust crate provides libsodium sealed boxes for WebAssembly. Usage: // Recipient: create a new key pair let reci

Frank Denis 16 Aug 28, 2022
WebAssembly on Rust is a bright future for making applications run at the Edge or on Serverless technologies.

WebAssembly Tour WebAssembly on Rust is a bright future for making applications run at the Edge or on Serverless technologies. We spend a lot of ti

Thang Chung 129 Dec 28, 2022