Lumen - A new compiler and runtime for BEAM languages

Overview

Machine  Vendor   Operating System  Host   Subgroup      Status (CI badge)
wasm32   unknown  unknown           macOS  N/A           wasm32-unknown-unknown (macOS)
wasm32   unknown  unknown           Linux  N/A           wasm32-unknown-unknown (Linux)
x86_64   apple    darwin            macOS  compiler      x86_64-apple-darwin compiler
x86_64   apple    darwin            macOS  libraries     x86_64-apple-darwin Libraries
x86_64   apple    darwin            macOS  lumen/otp     x86_64-apple-darwin lumen/otp
x86_64   apple    darwin            macOS  runtime full  x86_64-apple-darwin Runtime Full
x86_64   unknown  linux-gnu         Linux  libraries     x86_64-unknown-linux-gnu Libraries
x86_64   unknown  linux-gnu         Linux  lumen/otp     x86_64-unknown-linux-gnu lumen/otp
x86_64   unknown  linux-gnu         Linux  runtime full  x86_64-unknown-linux-gnu Runtime Full

Contributing

In order to build Lumen, or make changes to it, you'll need the following installed:

Tools

First, you will need to install rustup; follow the installation instructions on the rustup website.

Once you have installed rustup, you will need to install a nightly version of Rust (our CI currently builds against the 2021-01-29 nightly). We require nightly due to the large number of unstable features we use, as well as some of our dependencies for the WebAssembly targets.

# to use the latest nightly
rustup default nightly

# or, in case of issues, install the 2021-01-29 nightly to match our CI
rustup default nightly-2021-01-29

In order to run various build tasks in the project, you'll need the cargo-make plugin for Cargo. You can install it with:

cargo install cargo-make

You can see what tasks are available with cargo make --print-steps.

You may also want to install the following tools for editor support (rustfmt will be required on all pull requests!):

rustup component add rls rustfmt clippy

Next, you will need to install the wasm32 targets for the toolchain:

rustup target add wasm32-unknown-unknown --toolchain 

LLVM

LLVM (with our modifications) is used by Lumen's code generation backend. It is needed to build the compiler. Typically, you'd need to build this yourself, which we have instructions for below; but we also provide prebuilt distributions that have everything needed.

Installing Prebuilt Distributions (Recommended)
Linux

The instructions below reference the $XDG_DATA_HOME environment variable. It is recommended to export the XDG base directory variables in general, but if you have not, just replace the uses of $XDG_DATA_HOME below with $HOME/.local/share, which is the usual default for this variable.
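If you prefer not to configure the XDG variables by hand, a guarded default like the following (an illustrative snippet, not part of the official instructions) lets the commands below work unchanged:

```shell
# Fall back to the conventional default when $XDG_DATA_HOME is unset
export XDG_DATA_HOME="${XDG_DATA_HOME:-$HOME/.local/share}"
```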

mkdir -p $XDG_DATA_HOME/llvm/
cd $XDG_DATA_HOME/llvm/
wget https://github.com/lumen/llvm-project/releases/download/lumen-12.0.0-dev_2020-10-22/clang+llvm-12.0.0-x86_64-linux-gnu.tar.gz
tar -xz --strip-components 1 -f clang+llvm-12.0.0-x86_64-linux-gnu.tar.gz
rm clang+llvm-12.0.0-x86_64-linux-gnu.tar.gz
cd -
macOS
mkdir -p $XDG_DATA_HOME/llvm/
cd $XDG_DATA_HOME/llvm/
wget https://github.com/lumen/llvm-project/releases/download/lumen-12.0.0-dev_2020-10-22/clang+llvm-12.0.0-x86_64-apple-darwin19.5.0.tar.gz
tar -xzf clang+llvm-12.0.0-x86_64-apple-darwin19.5.0.tar.gz
rm clang+llvm-12.0.0-x86_64-apple-darwin19.5.0.tar.gz
mv clang+llvm-12.0.0-x86_64-apple-darwin19.5.0 lumen
cd -
Other

We don't yet provide prebuilt packages for other operating systems; you'll need to build from source following the directions below.

Building From Source

LLVM requires CMake, a C/C++ compiler, and Python. It is highly recommended that you also install Ninja and CCache to make the build significantly faster, especially on subsequent rebuilds. You can find all of these dependencies in your system package manager, including Homebrew on macOS.

We have the build more or less fully automated; it takes just three steps:

git clone https://github.com/lumen/llvm-project
cd llvm-project
make llvm-shared

This will install LLVM to $XDG_DATA_HOME/llvm/lumen (or $HOME/.local/share/llvm/lumen if $XDG_DATA_HOME is not set). The build assumes that Ninja and CCache are installed, but you can customize the llvm target in the Makefile to use make instead by removing -G Ninja from the invocation of cmake; likewise, you can disable CCache by removing that option as well.

NOTE: Building LLVM the first time will take a long time, so grab a coffee, smoke 'em if you got 'em, etc.

Building Lumen

Once LLVM is installed/built, you can build the lumen executable:

LLVM_PREFIX=$XDG_DATA_HOME/llvm/lumen cargo make

This will create the compiler executable and associated toolchain for the host machine under bin in the root of the project. You can invoke lumen via the symlink bin/lumen, e.g.:

bin/lumen --help

You can compile an Erlang file to an executable (on x86_64 only, currently):

bin/lumen compile --output-dir _build  [..]

This will produce, in the current working directory, an executable with the same name as the source file and a .out or .exe extension, depending on your platform.

NOTE: The compiler/runtime are still in experimental stages, so stability is not guaranteed, and you may need to provide additional compiler flags if the linker warns about missing symbols, e.g. -lpthread.

Project Structure

Lumen is currently divided into a few major components:

  • Compiler
  • Interpreter
  • Runtime(s)

Lumen's frontend and diagnostics libraries were moved into the EIR Project, which includes both the Erlang parser and the high-level intermediate representation EIR, short for Erlang Intermediate Representation. Lumen depends on the EIR libraries for those components.

Compiler

The Lumen compiler is composed of the following sub-libraries/components:

  • liblumen_target, contains target platform metadata and configuration
  • liblumen_session, contains state and configuration for a single instantiation of the compiler, or "session". This is where you can find the bulk of option processing, input/output generation, and related items.
  • liblumen_compiler, contains the core of the compiler driver and incremental compilation engine (built on salsa), as well as all of the higher level queries for generating artifacts from parsed sources.
  • liblumen_codegen, contains the code generation backend, which is divided into two primary phases: the first handles translation from EIR to our own dialect of MLIR (or, in some cases, LLVM IR directly). This translation mostly aims to preserve the level of abstraction found in EIR, while preparing for conversion to LLVM IR. The second phase is conversion of our MLIR dialect to LLVM, which is where the bulk of the codegen work occurs.
  • liblumen_term, contains the essential parts of our term encoding scheme, and is shared with the runtime libraries. The compiler requires this in order to handle encoding constant terms during compilation.
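To give a rough sense of what a term-encoding scheme involves, here is a small sketch of an immediate-term encoding using tagged machine words. This is purely illustrative; the constants and tag layout are made up and do not reflect liblumen_term's actual scheme:

```rust
// Hypothetical immediate-term encoding: the low 2 bits of a word are a type tag.
const TAG_BITS: u64 = 2;
const TAG_MASK: u64 = 0b11;
const TAG_FIXNUM: u64 = 0b01;

// Pack a small integer into a word, reserving the low bits for the tag.
fn encode_fixnum(n: i64) -> u64 {
    ((n as u64) << TAG_BITS) | TAG_FIXNUM
}

// Arithmetic right shift recovers the signed value.
fn decode_fixnum(term: u64) -> i64 {
    (term as i64) >> TAG_BITS
}

fn is_fixnum(term: u64) -> bool {
    term & TAG_MASK == TAG_FIXNUM
}

fn main() {
    let t = encode_fixnum(42);
    assert!(is_fixnum(t));
    assert_eq!(decode_fixnum(t), 42);
    assert_eq!(decode_fixnum(encode_fixnum(-7)), -7);
}
```

Because the compiler must emit constant terms directly into generated code, it needs exactly this kind of encode/decode logic at compile time, which is why the library is shared with the runtime.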

Runtime(s)

The runtime is broken up into multiple libraries:

  • liblumen_core, contains the essential APIs for interacting with the system, performing allocations, as well as various common types used throughout Lumen.
  • liblumen_alloc, contains the bulk of the Erlang Runtime System core data types and APIs.
  • liblumen_crt, acts as the core runtime entry point for executables, and handles bootstrapping the runtime system. This is linked into all compiler-generated executables.
  • lumen_rt_core, (wip) the core runtime library used across all target-specific runtimes.
  • lumen_rt_minimal, (wip) an experimental runtime library built on top of lumen_rt_core, designed for x86_64 platforms. Currently used as the runtime for executables generated by the compiler.
  • lumen_web, the original WebAssembly runtime; builds on lumen_rt_full.
  • lumen_rt_full, the original runtime library for all targets. This is slowly being broken up into smaller pieces, either merged into lumen_rt_core or moved into new, more target-specific runtime crates. Currently used by the interpreter, and contains all of the BIF functions implemented so far.

The above collection of libraries corresponds to ERTS in the BEAM virtual machine.

Making Changes

Before making any major changes, please open an issue tagged "RFC" with the problem you need to solve, your proposed solution, and any outstanding questions you have in terms of implementation. The core team (and you) will use that issue to talk through the changes and either green light the proposal, or request changes. In some cases, a proposal may request changes that are either incompatible with the project's goals, or impose too high of a maintenance or complexity burden, and will be turned down. The importance of having the RFC discussion first is that it prevents someone from doing a bunch of work that will ultimately not be upstreamed, and allows the core team or the community to provide feedback that may make the work simpler, or better in the end.

For smaller changes/bug fixes, feel free to open an issue first if you are new to the project and want some guidance on working through the fix. Otherwise, it is acceptable to just open a PR directly with your fix, and let the review happen there.

Always feel free to open issues for bugs, and even perceived issues or questions, as they can be a useful resource for others; but please do make sure to use the search function to avoid duplication!

If you plan to participate in discussions, or contribute to the project, be aware that this project will not tolerate abuse of any kind against other members of the community; if you feel that someone is being abusive or inappropriate, please contact one of the core team members directly (or all of us). We want to foster an environment where people both new and experienced feel welcomed, can have their questions answered, and hopefully work together to make this project better!

About Lumen

Lumen is not only a compiler, but a runtime as well. It consists of two parts:

  • A compiler for Erlang to native code for a given target (x86, ARM, WebAssembly)
  • An Erlang runtime, implemented in Rust, which provides the core functionality needed to implement OTP

The primary motivator for Lumen's development was the ability to compile Elixir applications targeting WebAssembly, enabling the use of Elixir as a language for frontend development. Lumen can also target other platforms, producing self-contained executables on platforms such as x86.

Lumen is different from BEAM in the following ways:

  • It is an ahead-of-time compiler, rather than a virtual machine that operates on bytecode
  • It has some additional restrictions to allow more powerful optimizations to take place, in particular hot code reloading is not supported
  • The runtime library provided by Lumen is written in Rust, and while very similar, differs in mostly transparent ways. One of the goals is to provide a better foundation for learning how the runtime is implemented, and to take advantage of Rust's more powerful static analysis to catch bugs early.
  • It has support for targeting WebAssembly, as well as other targets.

The result of compiling a BEAM application via Lumen is a static executable. This differs significantly from how deployment on the BEAM works today (i.e. via OTP releases). While we sacrifice the ability to perform hot upgrades/downgrades, we make huge gains in cross-platform compatibility and ease of use. Simply drop the executable on a compatible platform and run it; no extra tools or special build considerations are required. This works the same way that building Rust or Go applications works today.

Goals

  • Support WebAssembly as a build target
  • Produce easy-to-deploy static executables as build artifacts
  • Integrate with tooling provided by BEAM languages
  • More efficient execution by removing the need for an interpreter at runtime
  • Feature parity with mainline OTP (with exception of the non-goals listed below)

Non-Goals

  • Support for hot upgrades/downgrades
  • Support for dynamic code loading

Lumen is an alternative implementation of Erlang/OTP, so it is not as battle-tested, nor necessarily as performant, as the BEAM itself. Until we have a chance to run some benchmarks, it is hard to know what the actual performance difference between the two is.

Lumen is not intended to replace BEAM at this point in time. At a minimum, the stated non-goals of this project mean that for at least some percentage of projects, some required functionality would be missing. However, it is meant to be a drop-in replacement for applications which are better served by its feature set.

Architecture

Compiler

The compiler frontend accepts Erlang source files. This is parsed into an abstract syntax tree, lowered into EIR (Erlang Intermediate Representation), then finally lowered to LLVM IR where codegen is performed.

Internally, the compiler represents Erlang/Elixir code in a form very similar to continuation-passing style. Continuations are a powerful construct that enable straightforward implementations of non-local returns/exceptions, green threading, and more. Optimizations are primarily performed on this representation, prior to lowering to LLVM IR. See eirproject/eir for more information on the compiler frontend and EIR itself.
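To make the continuation-passing idea concrete, here is a tiny sketch in Rust (purely illustrative, not Lumen's actual IR): each function receives its "return" as a continuation, so a non-local exit such as an exception is just a call to a different continuation rather than stack unwinding:

```rust
// Continuation-passing style: instead of returning, call a continuation.
// An error exit is simply a jump to the error continuation -- no unwinding.
fn safe_div(
    n: i64,
    d: i64,
    ok: impl FnOnce(i64) -> i64,
    err: impl FnOnce(&str) -> i64,
) -> i64 {
    if d == 0 {
        err("division by zero") // non-local exit
    } else {
        ok(n / d) // normal return path
    }
}

fn main() {
    assert_eq!(safe_div(10, 2, |v| v, |_| -1), 5);
    assert_eq!(safe_div(10, 0, |v| v, |_| -1), -1);
}
```

The same shape generalizes to green threading: suspending a process amounts to saving its current continuation so the scheduler can resume it later.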

During lowering to LLVM IR, the continuation representation is stripped away, and platform-specific methods for implementing various constructs are generated. For example, on x86_64, hand-written assembly is used to perform extremely cheap stack switching by the scheduler, and to provide dynamic function application facilities for the implementation of apply. Currently, the C++-style zero-cost exception model is used for implementing exceptions. There are some future proposals in progress for WebAssembly that may allow us to use continuations for exceptions, but that is not yet stabilized or implemented in browsers.

The compiler produces object files, and handles linking objects together into an executable. It can also dump all of the intermediate artifacts, such as the AST, EIR, MLIR in various forms, LLVM IR, LLVM bitcode, and plain assembly.

Runtime

The runtime design is mostly the same as OTP's, but rather than running an interpreter, the code is compiled ahead of time:

  • The entry point sets up the environment, and starts the scheduler
  • The runtime runs one scheduler per thread
  • Each scheduler can steal work from other schedulers if it is short on work
  • Processes are spawned on the same scheduler as the process they are spawned from, but a scheduler is able to steal them away to load balance
  • I/O is asynchronous, with dedicated threads and an event loop for dispatch
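The work-stealing behavior described above can be sketched as follows (a minimal illustration only; the names and the real scheduler's structure are assumptions, and the actual implementation is far more involved):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Each scheduler owns a run queue of process IDs; an idle scheduler
// may steal from the back of a peer's queue to balance load.
struct Scheduler {
    run_queue: Mutex<VecDeque<u64>>,
}

impl Scheduler {
    fn next(&self, peers: &[Arc<Scheduler>]) -> Option<u64> {
        // Prefer local work...
        if let Some(pid) = self.run_queue.lock().unwrap().pop_front() {
            return Some(pid);
        }
        // ...otherwise try to steal from a peer.
        for peer in peers {
            if let Some(pid) = peer.run_queue.lock().unwrap().pop_back() {
                return Some(pid);
            }
        }
        None
    }
}

fn main() {
    let busy = Arc::new(Scheduler {
        run_queue: Mutex::new(VecDeque::from(vec![1, 2, 3])),
    });
    let idle = Arc::new(Scheduler {
        run_queue: Mutex::new(VecDeque::new()),
    });
    // The idle scheduler has no local work, so it steals from the busy one.
    assert_eq!(idle.next(&[busy.clone()]), Some(3));
}
```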

The initial version will be quite spartan, but this is so we can focus on getting the runtime behavior rock solid before we circle back to add in more capabilities.

NIFs

NIFs can be defined in any language with a C FFI; they will need to be compiled to object files and then passed via linker flags to the compiler. The compiler will then ensure that the NIFs are linked into the executable.

The design of the FFI is still up in the air - we will likely have a compatibility layer which will mimic the existing erl_nif.h interface, but since the runtime is different, there may be opportunities to provide more direct hooks to parts of the system.
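As a sketch of what authoring such a NIF could look like from Rust (the function name, signature, and `u64` term representation here are invented for illustration; as noted above, the real interface is still undecided), one would expose a C-ABI function and compile it to an object file for the linker:

```rust
// A hypothetical NIF: an extern "C" function that could be compiled
// to an object file and linked into a Lumen-generated executable.
// The `u64` term representation is an assumption made for this sketch.
#[no_mangle]
pub extern "C" fn my_nif_add(a: u64, b: u64) -> u64 {
    a + b
}

fn main() {
    // Callable from Rust as well, shown here for demonstration.
    assert_eq!(my_nif_add(2, 3), 5);
}
```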

License

Apache 2.0

Comments
  • Compilation fails with SIGSEGV on Windows WSL2 (Ubuntu)

    A release! Congratulations!

    I gave it a test on my computer which is running Ubuntu in WSL2 on Windows 10.

    Here's the result:

    % init.erl
    -module(init).
    -export([start/0]).
    
    start() ->
        erlang:display(hello_lumen).
    
    $ lumen compile
       Compiling /home/louis/src/lpil/learning/erlang/lumen_test/init.erl
    fish: 'lumen compile' terminated by signal SIGSEGV (Address boundary error)
    

    Let me know if I can help in future with debugging + testing. Cheers

    bug compiler target:x86_64-unknown-linux-gnu target:x86_64-unknown-linux-musl 
    opened by lpil 13
  • operand #N does not dominate this use

    init.erl

    -module(init).
    -export([start/0]).
    -import(erlang, [demonitor/1, display/1, process_info/2, spawn_monitor/1]).
    
    start() ->
      {ChildPid, MonitorReference} = spawn_monitor(fun () ->
        ok
      end),
      wait(2),
      Message = {'DOWN', MonitorReference, process, ChildPid, normal},
      display(has_message(Message)),
      display(demonitor(MonitorReference)),
      display(has_message(Message)).
    
    has_message(Message) ->
      Messages = process_info(self(), messages),
      lists:member(Message, Messages).
    
    wait(Milliseconds) ->
      receive
      after Milliseconds -> ok
      end,
      ok.
    

    Error

    Compiling tests/lib/erlang/demonitor_1/with_reference/with_monitor/does_not_flush_existing_message/init.erl
    loc("tests/lib/erlang/demonitor_1/with_reference/with_monitor/does_not_flush_existing_message/init.erl":11:11): error: operand #0 does not dominate this use
    
    
    module @init {
      eir.func @lumen_eh_personality() -> i32 attributes {std.varargs = true}
      eir.func @"init:start-fun-0-0/0"() -> !eir.term attributes {personality = @lumen_eh_personality} {
        %0 = eir.constant.atom #eir.atom<{ id = 71, value = 'ok' }>
        %1 = eir.cast %0 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        eir.return %1 : !eir.term
      }
      eir.func @"init:start/0"() -> !eir.term attributes {personality = @lumen_eh_personality} {
        %0 = eir.closure @"init:start-fun-0-0/0"() {arity = 0 : i8, env_len = 0 : i8, index = 0 : i32, module = #eir.atom<{ id = 62, value = 'init' }>, old_unique = 0 : i32, unique = "\FD\D4\0CF\BB$\CEg\00\00\00\00\00\00\00\00"} : () -> !eir.term
        %1 = eir.call @"erlang:spawn_monitor/1"(%0) : (!eir.term) -> !eir.term
        eir.br ^bb11(%1 : !eir.term)
      ^bb1(%2: !eir.term):  // pred: ^bb9
        %3 = eir.call @"erlang:display/1"(%2) : (!eir.term) -> !eir.term
        eir.br ^bb2(%3 : !eir.term)
      ^bb2(%4: !eir.term):  // pred: ^bb1
        %5 = eir.call @"erlang:demonitor/1"(%31) : (!eir.term) -> !eir.term
        eir.br ^bb3(%5 : !eir.term)
      ^bb3(%6: !eir.term):  // pred: ^bb2
        %7 = eir.call @"erlang:display/1"(%6) : (!eir.term) -> !eir.term
        eir.br ^bb4(%7 : !eir.term)
      ^bb4(%8: !eir.term):  // pred: ^bb3
        %9 = eir.constant.atom #eir.atom<{ id = 74, value = 'DOWN' }>
        %10 = eir.cast %9 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        %11 = eir.constant.atom #eir.atom<{ id = 75, value = 'process' }>
        %12 = eir.cast %11 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        %13 = eir.constant.atom #eir.atom<{ id = 76, value = 'normal' }>
        %14 = eir.cast %13 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        %15 = eir.tuple(%10, %31, %12, %30, %14) {alloca = false} : (!eir.term, !eir.term, !eir.term, !eir.term, !eir.term) -> !eir<"box<!eir<"tuple<5x!eir.term">>">
        %16 = eir.cast %15 : !eir<"box<!eir<"tuple<5x!eir.term">>"> to !eir.term {from = !eir<"box<!eir<"tuple<5x!eir.term">>">, to = !eir.term}
        %17 = eir.call @"init:has_message/1"(%16) : (!eir.term) -> !eir.term
        eir.br ^bb5(%17 : !eir.term)
      ^bb5(%18: !eir.term):  // pred: ^bb4
        %19 = eir.call @"erlang:display/1"(%18) : (!eir.term) -> !eir.term
        eir.return %19 : !eir.term
      ^bb6(%20: !eir.term):  // pred: ^bb7
        %21 = eir.constant.atom #eir.atom<{ id = 46, value = 'error' }>
        %22 = eir.cast %21 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        %23 = eir.constant.atom #eir.atom<{ id = 200, value = 'badmatch' }>
        %24 = eir.cast %23 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        %25 = eir.tuple(%24, %35) {alloca = false} : (!eir.term, !eir.term) -> !eir<"box<!eir<"tuple<2x!eir.term">>">
        %26 = eir.cast %25 : !eir<"box<!eir<"tuple<2x!eir.term">>"> to !eir.term {from = !eir<"box<!eir<"tuple<2x!eir.term">>">, to = !eir.term}
        eir.throw(%22, %26, %20) : (!eir.term, !eir.term, !eir.term)
      ^bb7:  // pred: ^bb8
        %27 = eir.trace_capture : !eir.term
        eir.br ^bb6(%27 : !eir.term)
      ^bb8:  // pred: ^bb13
        eir.br ^bb7
      ^bb9(%28: !eir.term):  // pred: ^bb10
        %29 = eir.call @"init:has_message/1"(%16) : (!eir.term) -> !eir.term
        eir.br ^bb1(%29 : !eir.term)
      ^bb10(%30: !eir.term, %31: !eir.term):  // pred: ^bb12
        %32 = eir.constant.int #eir.int<{ value = 2 }>
        %33 = eir.cast %32 : !eir.fixnum to !eir.term {from = !eir.fixnum, to = !eir.term}
        %34 = eir.call @"init:wait/1"(%33) : (!eir.term) -> !eir.term
        eir.br ^bb9(%34 : !eir.term)
      ^bb11(%35: !eir.term):  // pred: ^bb0
        %36 = eir.is_type(%35) {type = !eir<"box<!eir<"tuple<2x!eir.term">>">} : (!eir.term) -> i1
        eir.cond_br %36 : i1, ^bb12, ^bb13(%35 : !eir.term)
      ^bb12:  // pred: ^bb11
        %37 = eir.cast %35 : !eir.term to !eir<"box<!eir<"tuple<2x!eir.term">>"> {from = !eir.term, to = !eir<"box<!eir<"tuple<2x!eir.term">>">}
        %38 = eir.getelementptr %37[] {element = !eir.term, index = 1 : index, pointee = !eir<"tuple<2x!eir.term">} : (!eir<"box<!eir<"tuple<2x!eir.term">>">) -> !eir.ref<!eir.term>
        %39 = eir.load(%38) : (!eir.ref<!eir.term>) -> !eir.term
        %40 = eir.getelementptr %37[] {element = !eir.term, index = 2 : index, pointee = !eir<"tuple<2x!eir.term">} : (!eir<"box<!eir<"tuple<2x!eir.term">>">) -> !eir.ref<!eir.term>
        %41 = eir.load(%40) : (!eir.ref<!eir.term>) -> !eir.term
        eir.br ^bb10(%39, %41 : !eir.term, !eir.term)
      ^bb13(%42: !eir.term):  // pred: ^bb11
        eir.br ^bb8
      }
      eir.func @"init:wait/1"(%arg0: !eir.term) -> !eir.term attributes {personality = @lumen_eh_personality} {
        %0 = eir.receive.start %arg0 : (!eir.term) -> !eir.receive_ref
        br ^bb4(%0 : !eir.receive_ref)
      ^bb1:  // pred: ^bb4
        %1 = eir.receive.message %3 : (!eir.receive_ref) -> !eir.term
        br ^bb5(%1 : !eir.term)
      ^bb2:  // pred: ^bb4
        %c3_i8 = constant 3 : i8
        %2 = cmpi "eq", %4, %c3_i8 : i8
        cond_br %2, ^bb6, ^bb3
      ^bb3:  // pred: ^bb2
        eir.unreachable
      ^bb4(%3: !eir.receive_ref):  // pred: ^bb0
        %4 = eir.receive.wait %3 : (!eir.receive_ref) -> i8
        %c2_i8 = constant 2 : i8
        %5 = cmpi "eq", %4, %c2_i8 : i8
        cond_br %5, ^bb1, ^bb2
      ^bb5(%6: !eir.term):  // pred: ^bb1
        %7 = eir.constant.atom #eir.atom<{ id = 71, value = 'ok' }>
        %8 = eir.cast %7 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        eir.return %8 : !eir.term
      ^bb6:  // pred: ^bb2
        %9 = eir.constant.atom #eir.atom<{ id = 71, value = 'ok' }>
        %10 = eir.cast %9 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        eir.return %10 : !eir.term
      }
      eir.func @"init:has_message/1"(%arg0: !eir.term) -> !eir.term attributes {personality = @lumen_eh_personality} {
        %0 = eir.call @"erlang:self/0"() : () -> !eir.term
        eir.br ^bb1(%0 : !eir.term)
      ^bb1(%1: !eir.term):  // pred: ^bb0
        %2 = eir.constant.atom #eir.atom<{ id = 80, value = 'messages' }>
        %3 = eir.cast %2 : !eir.atom to !eir.term {from = !eir.atom, to = !eir.term}
        %4 = eir.call @"erlang:process_info/2"(%1, %3) : (!eir.term, !eir.term) -> !eir.term
        eir.br ^bb2(%4 : !eir.term)
      ^bb2(%5: !eir.term):  // pred: ^bb1
        %6 = eir.call @"lists:member/2"(%arg0, %5) : (!eir.term, !eir.term) -> !eir.term
        eir.return %6 : !eir.term
      }
    }loc("tests/lib/erlang/demonitor_1/with_reference/with_monitor/does_not_flush_existing_message/init.erl":1:1): error: module verification error
    error: unexpected error occurred when lowering EIR module
    
    bug compiler 
    opened by KronicDeth 11
  • Segfault compiling hello world on Linux Mint 19

    Howdy! Really happy to see an early version of the compiler out there for us to play with!

    I just tried to compile an Erlang hello world and got a segfault. Here's the code:

    -module(init).
    -export([start/0]).
    
    start() ->
        erlang:display(hello_world).
    

    Here's my system info:

    OS: Linux Mint 19.3 Cinnamon
    Linux kernel: 4.15.0-62-generic
Processor: Intel Core i7-8550U
    

    And here's my output:

    $ /opt/lumen/bin/lumen compile -v ./hello_world.erl
       Compiling ./hello_world.erl
    [1]    6558 segmentation fault (core dumped)  /opt/lumen/bin/lumen compile -v ./hello_world.erl
    

    Any other debug logs or something I can generate to help provide more info?

    bug target:x86_64-unknown-linux-gnu duplicate 
    opened by devonestes 7
  • Building example fails with unresolved dlmalloc import

    Attempting to wasm-pack build the interpreter-in-browser example errors with the current rust nightly. It appears core::alloc::Alloc no longer exists?

    Error:

    error[E0432]: unresolved import `core::alloc::Alloc`
      --> /Users/ben/.cargo/registry/src/github.com-1ecc6299db9ec823/dlmalloc-0.1.3/src/lib.rs:21:19
       |
    21 | use core::alloc::{Alloc, Layout, AllocErr};
       |                   ^^^^^ no `Alloc` in `alloc`
    
    error: aborting due to previous error
    
    For more information about this error, try `rustc --explain E0432`.
    error: could not compile `dlmalloc`.
    

Falling back to an older Rust nightly seemed to resolve this: rustup install nightly-2020-01-02 followed by rustup default nightly-2020-01-02.

    Very new to Rust so apologies if I'm missing something fundamental.

    bug 
    opened by blambillotte 7
  • Discussion: Integrating Codegen and Runtime

    As I mentioned in the call today, I'd like to resolve some outstanding items needed to link up generated code with the runtime.

    Items Needed

    • [x] Lumen installer generates toolchain with target-specific libraries for linking with generated code
    • [x] Linker knows about runtime crates and which to link for each target
    • [x] Atom table initialized from constants gathered during compilation at runtime start
    • [ ] Codegen generates metadata for scheduler so it knows how to start the compiled application, i.e. it knows what function to call for init:start (See Init)
    • [ ] Scheduler needs to know when a process has yielded control due to a request, due to an error being thrown but not caught, or due to process exit (both normally or otherwise). (See Signals)
    • [ ] Runtime needs the ability to construct Erlang-style stack traces. We need to store the necessary metadata somewhere, and provide a way to access it from the runtime. (See Traces)
    • [ ] Scheduler needs to know how to call into generated code when resuming a process (See Calling Convention, Init)

    Init

    As a refresher, here's the basic startup routine for an Erlang application:

    • VM starts and initializes the runtime
    • VM hands control to Erlang by invoking/starting init
    • init performs core initialization, reads the boot script containing instructions on how to start the system (i.e. which applications to load/start, in what order, and what functions to call to do so)
    • init starts the application_controller
    • init walks the boot script, collaborating with the application_controller to load/start applications in the order given; each application is started by starting an application_master for that application.
    • each application_master starts its application by invoking the start/2 callback (and others as well)

We will want to mirror this process more or less. In the near term, though, we don't have all of the OTP infrastructure in place; we're working with pure Erlang plus our runtime, so things differ a bit. Here's more or less what I'm operating on as a starting point:

    • Core runtime starts (liblumen_crt - handles calling into the target-specific runtime crate, initializes the atom table)
    • Core runtime calls the target-specific runtime entry point (lumen_web/lumen_runtime). This starts the scheduler and performs any other necessary initialization needed by the runtime.
    • Scheduler reads the application metadata to obtain the function pointer for the "main" function of the generated code. This would correspond to the init module. The function takes no arguments and is responsible for kicking off execution of the Erlang part of the application.

    So given a simple Erlang module:

    -module(init).
    -export([main/0]).
    
    main() ->
      io:format("~s\n", ["Hello world!"]).
    

    The scheduler would essentially be invoking the function pointer corresponding to the main/0 function. In this case, the function would just print "Hello world!", then signal to the scheduler that the process is exiting normally, which would result in the scheduler shutting down and the program terminating normally.
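In Rust terms, the handoff described above amounts to storing an entry-point function pointer in some application metadata and invoking it. The following is a schematic sketch (the type names, signature, and exit-status convention are all assumptions, not the actual runtime API):

```rust
// Hypothetical application metadata carrying the generated entry point.
struct AppMetadata {
    init_fn: extern "C" fn() -> i32,
}

// Stand-in for the compiler-generated `main/0` of the init module.
extern "C" fn generated_main() -> i32 {
    println!("Hello world!");
    0 // signal a normal exit to the scheduler
}

fn main() {
    let meta = AppMetadata { init_fn: generated_main };
    // The scheduler would spawn the root process and call this pointer.
    let status = (meta.init_fn)();
    assert_eq!(status, 0);
}
```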

    Down the road, when we start introducing the OTP libraries, we will provide the init module, and it will perform the same kind of startup procedure that init in BEAM does.

    I think this provides a nice foundation to build on. The key parts to decide are:

    • What metadata is needed in the scheduler
    • How does that metadata get generated/stored
    • How does the scheduler access it

    Signals

    In order for linking against the runtime to work at all, we need the ability to signal various conditions between the scheduler and the generated code which needs to branch based on those conditions.

    • The scheduler needs to know when control returns from the process code whether it returned due to yielding voluntarily (or due to reduction count/garbage collection), whether the process threw an error that was not caught, or whether the process is exiting (normally or abnormally).
    • The generated code needs to know that the scheduler wants it to yield after control returns from a call to the runtime. Likewise, the scheduler may want to signal an exit. When control is yielded to the scheduler, it can do these things without coordination, but when calling into the runtime, the scheduler needs cooperation from the process.
    • Beyond just signals, the generated code also needs to be able to monitor/maintain the reduction count on its own. In order to facilitate this, we would use a thread local that both the scheduler and the generated code can check/modify.

    My feeling is that we would use a set of thread locals to facilitate these things:

    • A thread local for the process signal, an enum which represents the different types of signals
    • Thread locals for any signal metadata that is necessary for specific signal types (i.e. an error value for errors that bubble up to the scheduler)
    • A thread local for the reduction count
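A sketch of what such thread locals could look like in Rust follows (the names, the signal variants, and the initial reduction count are purely illustrative assumptions):

```rust
use std::cell::Cell;

// Hypothetical process-signal enum shared between scheduler and generated code.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Signal {
    None,
    Yield,
    Error,
    Exit,
}

thread_local! {
    // The current signal the scheduler wants the process to observe.
    static SIGNAL: Cell<Signal> = Cell::new(Signal::None);
    // The remaining reduction budget for the running process.
    static REDUCTIONS: Cell<u32> = Cell::new(2000);
}

// Generated code would decrement the reduction count at call sites and
// check whether it should yield control back to the scheduler.
fn consume_reduction() -> bool {
    let remaining = REDUCTIONS.with(|r| {
        let n = r.get().saturating_sub(1);
        r.set(n);
        n
    });
    remaining == 0 || SIGNAL.with(|s| s.get()) == Signal::Yield
}

fn main() {
    assert!(!consume_reduction()); // budget remains, no signal pending
    SIGNAL.with(|s| s.set(Signal::Yield));
    assert!(consume_reduction()); // scheduler requested a yield
}
```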

    Calling Convention

    Related to Signals, is the calling convention between our Rust runtime code, and the generated Erlang code.

    If we decide not to use thread locals to signal error conditions/yielding/etc., then the next alternative is to have an ABI/calling convention that allows for effective multi-value returns. In practice that means using a struct with a field to carry the error/status flag, plus one or more fields based on the function's "real" return type. Where this gets ugly is that we already need something like this to communicate function-specific errors, so combining both types of signals into one convention is not ideal.

    "Global" signals aside, we can pretty trivially land on an FFI-safe Result/Option equivalent for use at our runtime boundaries. The Option case is actually the easiest, since we can make use of the none value/tag to represent the None variant. The presence of None could then be used to signal that an error occurred, where the error term itself is either stored in the process struct, or in a thread local (similar to errno for example).

    Other than that, I think for the near term we'll use the C ABI for all functions - in the future we'll likely use a separate calling convention for Erlang functions, akin to fastcc, but I don't think we have enough information yet on how code will get called, and we can't call from Rust into fastcc code directly, at least not safely.

    Traces

    To reiterate, there are two kinds of traces:

    1. Erlang stack traces, i.e. what you see in error reports from SASL, etc. Displayed in Erlang term format, so easy to parse/understand for users. Matches what was happening from an Erlang perspective, but no relation to the actual thread of execution. Produced when something goes wrong in user code.
    2. Native stack traces, i.e. not language-oriented, often showing a lot of frames for code you don't own but which was called between frames of your own code. In the worst case, the names are mangled and barely readable. The closest thing to a source of truth about where your application was at when it failed. Produced when shit hits the fan in the runtime itself, or some natively-implemented code fails.

    The Erlang code we're compiling generates debug symbols with locations that can be linked back to the original source code. Obtaining a native trace with those symbols is fairly easy (with some work to integrate libbacktrace, or something like it). On the other hand, obtaining an Erlang stack trace is not so easy. There is no easy way to get a trace of locations that correspond only to Erlang code.

    Instead we have to construct one somehow. One approach is to maintain our own frame stack that is used solely for Erlang traces, but that isn't ideal - it introduces overhead during execution and duplicates effort we'd prefer to avoid. That said, it has the best chance of mirroring the traces seen in BEAM.

    The approach I would like to try, though, is to leverage the debug symbols we generate plus libbacktrace. When capturing a trace, we capture a native trace; and when formatting for display, or constructing a term representation, we walk the captured trace and only print symbols for Erlang code.

    A mixture of the two approaches could be to have the function prologue push a pointer to the frame metadata onto a stack set aside for Erlang traces. Capturing a trace would take a snapshot of the stack, and formatting/constructing it would follow a similar pattern as the native traces, by resolving the locations in the stack to their corresponding symbols/metadata.

    In any case, this is pretty important for getting proper stack traces, but we can probably work around the absence of the functionality in the near term. I'd mostly like to land on a direction though sooner rather than later.

    compiler RFC 
    opened by bitwalker 7
  • :maps.get/3

    :maps.get/3

    I have no idea if I did this right, and I think there is more work required to get it working with the AOT compiler, but I figured I'd take a stab at maps:get/2 and maps:get/3 using maps:is_key as an example.

    enhancement runtime/bifs 
    opened by zachdaniel 7
  • Invalid escape sequences

    Invalid escape sequences

    There are invalid escape sequences according to EIR in the following files in OTP, which Erlang/BEAM deems valid, so EIR's rules need to be updated to match:

    • lib/dialyzer/src/typer.erl
    • lib/common_test/src/unix_telnet.erl
    • lib/compiler/src/cerl.erl
    • lib/compiler/src/core_scan.erl
    • lib/edoc/src/edoc_extract.erl
    • lib/edoc/src/edoc_layout.erl
    • lib/edoc/src/edoc_lib.erl
    • lib/edoc/src/edoc_macros.erl
    • lib/edoc/src/edoc_scanner.erl
    • lib/edoc/src/edoc_specs.erl
    • lib/edoc/src/edoc_tags.erl
    • lib/inets/src/http_server/httpd_request.erl
    • lib/inets/src/http_server/mod_esi.erl
    • lib/inets/src/http_server/mod_range.erl
    • lib/kernel/src/group.erl
    • lib/kernel/src/os.erl
    • lib/kernel/src/user_drv.erl
    • lib/megaco/src/engine/megaco_sdp.erl
    • lib/observer/src/cdv_bin_cb.erl
    • lib/observer/src/cdv_term_cb.erl
    • lib/observer/src/crashdump_viewer.erl
    • lib/os_mon/src/disksup.erl
    • lib/reltool/src/reltool.hrl
    • lib/sasl/src/systools_make.erl
    • lib/stdlib/src/edlin.erl
    • lib/stdlib/src/erl_scan.erl
    • lib/stdlib/src/escript.erl
    • lib/stdlib/src/io_lib.erl
    • lib/stdlib/src/io_lib_format.erl
    • lib/stdlib/src/io_lib_pretty.erl
    • lib/tools/src/tags.erl
    • lib/xmerl/src/xmerl_regexp.erl
    bug compiler erlang/otp 
    opened by KronicDeth 6
  • Not compiling the erlang.org hello world

    Not compiling the erlang.org hello world

    Same error for these two variants:

    -module(hello).
    -export([hello_world/0]).
    hello_world() -> io:fwrite("hello, world\n").
    
    -module(hello).
    -export([hello_world/0]).
    
    hello_world() -> erlang:display("hello, world\n").
    
    lawik@MacBook-Pro lumen-testing % lumen compile hello.erl
       Compiling hello.erl
    Assertion failed: (impl && "isa<> used on a null type."), function isa, file /Users/paulschoenfelder/src/github.com/bitwalker/llvm-project/mlir/include/mlir/IR/Types.h, line 292.
    zsh: abort      lumen compile hello.erl
    
    bug compiler 
    opened by lawik 5
  • Implement IOList BIFs

    Implement IOList BIFs

    Resolves #153.

    • [x] Implement and test iolist_to_binary/1
    • [x] Implement and test iolist_size/1
    • [x] Implement and test iolist_to_iovec/1
    • [x] Add testing for SubBinary
    • [x] Figure out whether to add testing for MatchContext. This doesn't seem to be possible yet.
    enhancement 
    opened by mlwilkerson 5
  • Global process counter and table

    Global process counter and table

    Changelog

    Enhancements

    • Global local process counter, so getting pids for processes no longer needs to go through the Process's Environment.
    • Process by pid map is moved to process::local. Processes are always passed as immutable references and interior mutability is used to support global process map trait requirements and to support smaller locks for future work.
    • Make all process arenas private to eliminate the need for Arcs around arenas when it is known the callers will always be in the same process.
      • Don't pass the arenas outside of impl Process to eliminate the need for RefCell.
    • Don't fake Send and Sync on Process.
      • Switch from im_rc to im so that the map arena is Send.
      • Make heap::Binary Send to override the *const u8 in bytes not being Send. heap::Binary is immutable after creation, so it can safely be Send even with the raw pointer.
      • Wrap all arenas in Process in Mutex. In most cases they will only be accessed by the one scheduler thread; the Mutex is only needed to make the Sync derivation happy and potentially during scheduler hand-off.

    Incompatible Changes

    • Eliminate Environment, as anything that would have been in the Environment will now be global, to eliminate unnecessary pointer hops.
    enhancement 
    opened by KronicDeth 5
  • Bump crossbeam-channel from 0.4.3 to 0.4.4

    Bump crossbeam-channel from 0.4.3 to 0.4.4

    Bumps crossbeam-channel from 0.4.3 to 0.4.4.

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 4
  • Overflow on a small integer causes a runtime error: `ImmediateOutOfRangeError`

    Overflow on a small integer causes a runtime error: `ImmediateOutOfRangeError`

    The following valid Erlang code will compile, but will cause a runtime error ImmediateOutOfRangeError:

    -module(init).
    -export([boot/1]).
    
    boot(_) ->
        X = factorial(20),
        erlang:display(X).
    
    -spec factorial(integer()) -> integer().
    factorial(0) ->
        1;
    factorial(N) when N > 0 ->
        N * factorial(N - 1).
    

    Expected Result

    It should display the correct value for factorial(20), which is 2432902008176640000.

    Actual Result

    It causes a runtime error ImmediateOutOfRangeError:

    % firefly compile -C no_default_init --bin smallnum_overflow.erl
       Compiling smallnum_overflow.erl
        Compiled init
          Linker generated executable to _build/firefly/arm64-apple-macosx11.0.0/smallnum_overflow
        Finished built smallnum_overflow in 829ms
    
    $ _build/firefly/arm64-apple-macosx11.0.0/smallnum_overflow
    thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ImmediateOutOfRangeError',
      runtimes/tiny/src/erlang/mod.rs:82:5
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
    fatal runtime error: failed to initiate panic, error 5
    zsh: abort      _build/firefly/arm64-apple-macosx11.0.0/smallnum_overflow
    

    Variations

    Changing the first function clause to return a different value will result in different behavior:

    This one will still cause the runtime error ImmediateOutOfRangeError. It should return 22439560350624250504263050062725120000 for factorial(20).

    factorial(0) ->
        2 bsl 62;
    

    This one will not cause a runtime error, but will return a wrong value 0 for factorial(20). The correct value is 44879120701248501008526100125450240000.

    factorial(0) ->
        2 bsl 63;
    

    This one will not cause a runtime error, and will return the correct value 89758241402497002017052200250900480000 for factorial(20).

    factorial(0) ->
        2 bsl 64;
    

    Environment

    • Firefly: develop branch, commit: 323e0ca6 (Sep 12, 2022 UTC)
    • macOS Monterey 12.6 (arm64)
    • Rust: nightly-2022-08-08
    • LLVM: firefly-15.0.0-dev_2022-08-27/clang+llvm-15.0.0-arm64-apple-darwin21.6.0.tar.gz
    opened by tatsuya6502 0
  • Some arithmetic expressions will not compile with error: failed to legalize operation 'cir.cast'

    Some arithmetic expressions will not compile with error: failed to legalize operation 'cir.cast'

    The following valid Erlang code will not compile. The Firefly compiler will emit an error: "failed to legalize operation 'cir.cast'".

    -module(init).
    -export([boot/1]).
    
    boot(_Args) -> ok.
    
    a(X, Y, Z) ->
        (X + Y) div Z.
    
    b() ->
        2 bsl 32 - 1.
    
    $ firefly compile -C no_default_init --bin init.erl
       Compiling init.erl
    !cir.number
    !cir.integer
    error: failed to legalize operation 'cir.cast'
       ┌─ init.erl:12:1
       │
    12 │ ╭ ╭     (X + Y) div Z.
    13 │ │ │
       │ ╰─│^ during generation of mlir associated with this source code
       │   ╰' see current operation: %8 = "cir.cast"(%6#1) {src = [!cir.number]} : (!cir.number) -> !cir.integer
    
    module @init attributes {llvm.data_layout = "e-m:o-i64:64-i128:128-n32:64-S128", llvm.target_triple = "arm64-apple-macosx11.0.0\00"} {
      cir.dispatch_table @init {
        cir.dispatch_entry "boot", 1, @"init:boot/1"
        cir.dispatch_entry "module_info", 1, @"init:module_info/1"
        cir.dispatch_entry "module_info", 0, @"init:module_info/0"
      }
      func @"init:module_info/0"() -> (i1, !cir.term) attributes {personality = @firefly_eh_personality} {
    
    ...
    

    Environment

    • Firefly: develop branch, commit: 323e0ca6 (Sep 12, 2022 UTC)
    • macOS Monterey 12.6 (arm64)
    • Rust: nightly-2022-08-08
    • LLVM: firefly-15.0.0-dev_2022-08-27/clang+llvm-15.0.0-arm64-apple-darwin21.6.0.tar.gz
    opened by tatsuya6502 0
  • Compiler will get segmentation fault for some kinds of `when` clauses on function

    Compiler will get segmentation fault for some kinds of `when` clauses on function

    The following valid Erlang code will cause the Firefly compiler to crash with a segmentation fault:

    -module(init).
    -export([boot/1]).
    
    boot(_Args) -> ok.
    
    a(X) when X rem 10 == 0 ->
        ok.
    
    b(Y) when Y div 10 > 2 ->
        ok.
    
    $ firefly compile -C no_default_init --bin init.erl
       Compiling init.erl
    zsh: segmentation fault  firefly compile -C no_default_init --bin init.erl
    

    Environment

    • Firefly: develop branch, commit: 323e0ca6 (Sep 12, 2022 UTC)
    • macOS Monterey 12.6 (arm64)
    • Rust: nightly-2022-08-08
    • LLVM: firefly-15.0.0-dev_2022-08-27/clang+llvm-15.0.0-arm64-apple-darwin21.6.0.tar.gz
    opened by tatsuya6502 0
  • Docker images of working Firefly installation Please

    Docker images of working Firefly installation Please

    TL;DR

    I was recently unable to build Firefly myself and I would love to use a Docker image made by someone who has successfully built it. I might be willing to dedicate more time to get it to build but it would be a lot nicer if I could try using it in a Docker environment to try out some things first.

    The long story:

    Hi all, I love the work you all are doing and I have high hopes for this project.

    Disclaimers: I am new to Rust; I have written only Hello World types of programs in Rust, but I have installed Rust on several development computers in the last few years. I really have a love of Erlang/Elixir and I can see how Firefly would be a great benefit to the BEAM ecosystem so I am taking an interest in this. I think Rust has great potential as well and this is a great application of it.

    This time around installing Rust on a new computer, I had no problem at all installing Rust and I was even able to try a few different nightly builds of Rust to try and compile Firefly successfully. In the nightly build that Firefly recommends for most stability (because it matches the version that the CI uses), I had several issues relating to "features" and "core_c_str" or something like that and I even tried learning how Rust "features" work in Cargo and kind of made progress but that's probably a fairly advanced thing to be worrying about for someone with my level of Rust/Cargo knowledge.

    Rather than open an issue about the compiler errors and my installation problems, I thought it would be more efficient to ask for a working Docker container because I'm sure several of the project authors are able to compile the project and work on it and I would mostly love to try out the tool and see if I can use it for my use cases. If I am able to do some things with it, I can more easily justify further investigations into Rust and figuring out how to build Firefly by myself.

    One of my use cases is compiling Erlang code down to WebAssembly, so I think that's the sort of thing that would be doable from a Docker container, since it doesn't have anything to do with cross-compiling to other operating systems and things (which I think can be difficult in Docker sometimes).

    opened by steele232 3
  • Implementing ETS

    Implementing ETS

    Implementing ETS

    Erlang Term Storage (ETS) is built on top of two data structures (depending on the options passed to ets:new/2) - an AVL tree in the general case (based on the paper by Adelson-Velsky and Landis), and a CA tree [1][2] for ordered sets with write concurrency enabled. For our purposes, we aim to build our ETS implementation on top of WAVL trees [3]. Some interesting references are included below [4][5][6]. Work on the compiler has kept us from building this yet, so this is a great way to get involved with the project!

    Goals

    • Implement ETS as a library in Rust that can be plugged into our runtime by writing BIFs that interact with the implementation. Ideally the implementation shouldn't be dependent on runtime specifics like the scheduler, processes, or term representation. If it turns out that such specifics are critical to the implementation though, we can revisit.
    • Should make use of safe Rust abstractions where possible
    • Should provide futures-based APIs for long-running functions (e.g. ets:insert/2 with a list of objects to insert). Functions which are "short enough" don't need to be written as futures, but if implementation is easier that way, it's fine. NOTE: It is essential that these futures do not attempt to hold locks across yield points, if Rust will even allow that in its type system.
    • ETS tables should be reference counted, and likely the values they contain as well, so that access by multiple processes is safe, and garbage collection of the tables and their contents can happen naturally.
    • While correctness is the most important property, attempts should be made to keep the implementation as performant as possible, since ETS is so heavily relied upon.

    References

    1. A Contention Adapting Approach to Concurrent Ordered Sets, 2018
    2. More Scalable Ordered Set for ETS Using Adaptation, 2014
    3. WAVL Trees
    4. Immutable AVL Trees
    5. ERTS AVL Tree
    6. ERTS CA Tree
    runtime runtime/bifs help wanted 
    opened by bcardarella 8
Releases(0.1.0-nightly-2020-09-02)