RISC-V hypervisor for TEE development

Overview

A micro hypervisor for RISC-V systems.

Quick Start

Building (using Bazel)

git submodule update --init
bazel build //:salus-all

Running

Prerequisites

Salus:

QEMU:

  • Out-of-tree patches are required; see table below.
  • Install libslirp-dev so that QEMU builds with the SLIRP network stack.
  • Build following the QEMU build instructions with --target-list=riscv64-softmmu (see the sketch below).
  • Set the QEMU= variable to point to the compiled QEMU tree when using the run_* scripts described below.
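
As a rough sketch, building the patched QEMU tree from the branch in the table below might look like the following (the --enable-slirp flag and parallel make are assumptions about a standard QEMU build, not salus-specific requirements):

    git clone -b salus-integration-10312022 https://github.com/rivosinc/qemu.git
    cd qemu
    ./configure --target-list=riscv64-softmmu --enable-slirp
    make -j$(nproc)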

Linux kernel:

  • Out-of-tree patches are required; see table below.
  • Build: ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- make defconfig Image (see the example below)
  • Set the LINUX= variable to point to the compiled Linux kernel tree when using the linux related run_* scripts described below.
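
For example, assuming the riscv64-unknown-linux-gnu- cross toolchain is on your PATH and using the branch from the table below:

    git clone -b salus-integration-10312022 https://github.com/rivosinc/linux.git
    cd linux
    ARCH=riscv CROSS_COMPILE=riscv64-unknown-linux-gnu- make defconfig Image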

Buildroot:

  • Out-of-tree patches are required; see table below.
  • Build: make qemu_riscv64_virt_defconfig && make (see the example below)
  • Set the BUILDROOT= variable to point to the buildroot source directory while running run_buildroot.sh script described below.
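
For example, using the branch from the table below:

    git clone -b salus-integration-2022.08.2 https://github.com/rivosinc/buildroot.git
    cd buildroot
    make qemu_riscv64_virt_defconfig && make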

Debian:

  • Download and extract a pre-baked riscv64-virt image from https://people.debian.org/~gio/dqib/ (see the sketch below).
  • Set the DEBIAN= variable to point to the extracted archive when using the run_debian.sh script described below.
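
A rough sketch (the archive name is a placeholder; use whichever riscv64-virt artifact the page currently offers):

    wget https://people.debian.org/~gio/dqib/<riscv64-virt-archive>.zip
    unzip <riscv64-virt-archive>.zip -d debian-riscv64-virt
    export DEBIAN=$PWD/debian-riscv64-virt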

Latest known-working branches:

Project    Branch
QEMU       https://github.com/rivosinc/qemu/tree/salus-integration-10312022
Linux      https://github.com/rivosinc/linux/tree/salus-integration-10312022
Buildroot  https://github.com/rivosinc/buildroot/tree/salus-integration-2022.08.2

Running Salus under QEMU

The make targets from the old Make/Cargo build are now shell scripts.

From the top level directory, run

scripts/run_tellus.sh

Many of the variables can be overridden using environment variables on the command line. For example, to use a different version of QEMU and 3 cores, you can do the following:

QEMU=/scratch/qemu-salus NCPU=3 scripts/run_tellus.sh

All the other scripts that run salus with Linux work analogously.

Linux VM

The scripts/run_linux.sh script will boot a bare Linux kernel as the host VM that will panic upon reaching init due to the lack of a root filesystem.

To boot a more functional Linux VM, use the scripts/run_debian.sh script which will boot a Debian VM with emulated storage and network devices using pre-baked Debian initrd and rootfs images.

Example:

  QEMU=<path-to-qemu-directory> \
  LINUX=<path-to-linux-tree> \
  DEBIAN=<path-to-pre-baked-image> \
  scripts/run_debian.sh

To quickly boot a functional Linux VM with a busybox-based rootfs built from Buildroot, use the scripts/run_buildroot.sh script. The Buildroot tree described above must be compiled to generate the rootfs with networking enabled.

Example:

    QEMU=<path-to-qemu-directory> \
    LINUX=<path-to-linux-tree> \
    BUILDROOT=<path-to-buildroot-repo> \
    scripts/run_buildroot.sh

Once booted, the VM can be SSH'ed into with root:root at localhost:7722.
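
For example:

    ssh -p 7722 root@localhost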

Additional emulated devices may be added with the EXTRA_QEMU_ARGS environment variable. Note that only PCI devices using MSI/MSI-X will be usable by the VM. virtio-pci devices may also be used with the iommu_platform=on,disable-legacy=on flags.

Example:

   EXTRA_QEMU_ARGS="-device virtio-net-pci,iommu_platform=on,disable-legacy=on" \
   ... \
   scripts/run_debian.sh

Test VM

A pair of test VMs are located in test-workloads.

tellus is a test VM, built with bazel build //test-workloads:tellus_guestvm_rule, that runs in VS mode and provides the ability to send test API calls to salus running in HS mode.

guestvm is a test confidential guest. It is started by tellus and used for testing the guest side of the TSM API.
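
To build both test VMs, run the bazel target named above:

    bazel build //test-workloads:tellus_guestvm_rule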

Once they have been built, you can use the command below to run them.

    QEMU=<path-to-qemu-directory> \
    scripts/run_tellus.sh

This will boot salus, tellus, and the guestvm using the specified QEMU.

Development

Bazel

One important difference between Bazel and Cargo is in the handling of crate dependencies. If you change a dependency, Cargo will pick it up automatically. But with Bazel, you must sync the changes. There is a script provided to help you do that. To repin the changes, you can just run scripts/repin.sh.
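
For example, after editing a crate dependency (the Cargo.toml location is illustrative), repin and rebuild:

    # edit the dependency, e.g. in a Cargo.toml
    scripts/repin.sh
    bazel build //:salus-all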

Overview - Initial prototype

  +---U-mode--+ +-----VS-mode-----+ +-VS-mode-+
  |           | |                 | |         |
  |           | | +---VU-mode---+ | |         |
  |   Salus   | | | VMM(crosvm) | | |  Guest  |
  | Delegated | | +-------------+ | |         |
  |   Tasks   | |                 | |         |
  |           | |    Host(linux)  | |         |
  +-----------+ +-----------------+ +---------+
        |                |               |
   TBD syscall   SBI (COVH-API)    SBI(COVG-API)
        |                |               |
  +-------------HS-mode-----------------------+
  |       Salus                               |
  +-------------------------------------------+
                         |
                        SBI
                         |
  +----------M-mode---------------------------+
  |       Firmware(OpenSBI)                   |
  +-------------------------------------------+

Host

Normally Linux; this is the primary operating system for the device, running in VS mode.

Responsibilities:

  • Scheduling
  • Memory allocation (except memory kept by firmware and salus at boot)
  • Guest VM start/stop/scheduling via COVH-API provided by salus
  • Device drivers and delegation

VMM

The virtual machine manager that runs in userspace of the host.

  • qemu/kvm or crosvm
  • configures memory and devices for guests
  • runs any virtualized or para-virtualized devices
  • runs guests with vcpu_run.

Guests

VS-mode operating systems started by the host.

  • Can run confidential or shared workloads.
  • Uses memory shared from or donated by the host
  • scheduled by the host
  • can start sub-guests
  • Confidential guests use COVG-API for salus/host services

Salus

The code in this repository. An HS-mode hypervisor.

  • starts the host and guests
  • manages stage-2 translations and IOMMU configuration for guest isolation
  • delegates some tasks such as attestation to u-mode helpers
  • measured by the trusted firmware/RoT

Firmware

M-mode code.

OpenSBI currently boots salus from the address (0x80200000) where the QEMU loader placed it, and passes the device tree to Salus.

The above instructions use the OpenSBI firmware built into QEMU. If OpenSBI needs to be built from scratch, fw_dynamic should be used for the -bios argument on the QEMU command line.
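
For example (only the -bios portion matters here; the remaining arguments come from the run_* scripts):

    qemu-system-riscv64 \
        -machine virt \
        -bios <path-to-opensbi-build>/fw_dynamic.bin \
        ...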

Vectors

Salus is able to detect if the CPU supports the vector extension. The same binary will run on processors with or without the extension, and will enable vector code if it is present.

Comments
  • build: Check every commit can be built

    It'd be nice if we could have our CI verifying that each and every commit from a PR can be properly built. The main reason for that is to be able to run a git bisect if later in the future we're trying to understand where something might have gone wrong.

    opened by sboeuf 12
  • Move MMIO region registration to the guest

    As @atishp04 and I have found, the other confidential compute implementations (pKVM, TDX, SEV) have the TVM decide what's MMIO and what isn't -- pKVM registers the regions via hypercalls, while TDX & SEV have a more convoluted scheme where a page fault gets reflected back to the TVM, which it can then turn into a hypercall. So, let's make tvm_add_emulated_mmio_region() a guest-side SBI call in a separate TEE-Guest extension to be consistent and avoid having to introduce our own KVM ioctl() to register MMIO regions on the host side.

    We should probably also do the same with shared memory. The host can tell the TVM via device-tree or whatever where it wants the shared memory region to be, and then the guest effectively "accepts" that as the shared memory region by making a tvm_add_shared_memory_region() SBI call.

    In both cases we can notify the host that the TVM has registered these regions by setting up the shared-memory block to look like an ECALL exit from the TVM.

    kvm 
    opened by abrestic-rivos 12
  • Use shared-memory to handle vCPU exits

    This patch defines the shared-memory structure used to communicate non-confidential state between the TSM and host, and the API to register it.

    Two open questions:

    • Should we try to replicate more standard RISC-V registers, or continue to use the custom fields/values we've defined previously? i.e. use sanitized scause, htval, htinst, etc values rather than our custom exit_cause0/exit_cause1 plus associated enums. The former will have better re-use with any sort of nested virtualization extension (and existing KVM code in general), while the latter will be more space efficient and theoretically mean less work for the OS/VMM to do on vCPU exit (e.g. no need to decode instructions again, or look up if a page fault is confidential or not). I ended up going with the standard RISC-V register approach here to reduce the overall surface area of the API. (I realize that my re-use of htinst for WFI isn't standard-compliant.)
    • Do we want to reserve entire pages for these shared memory structs, or let the OS/VMM place them wherever they want (in non-confidential memory, of course)? I'm leaning towards the latter since it's mostly informational for the OS/VMM, i.e. causing the structs to overlap or otherwise place them somewhere where they could be freely read/written won't violate the confidentiality of a TVM.

    Thoughts?

    opened by abrestic-rivos 10
  • gdb debugging

    I'm trying to use gdb to debug with the bazel-bin/salus binary. Some steps are below, but it looks like gdb can't find the source code (for example main.rs) based on the symbol file bazel-bin/salus. Has anyone worked with gdb? Thanks

    $ gdb-multiarch bazel-bin/salus
    $ set architecture riscv:rv64
    $ target remote localhost:1234
    $ set directories ./src/
    $ add-symbol-file ./bazel-bin/test-workloads/guestvm
    $ add-symbol-file ./bazel-bin/test-workloads/tellus

    opened by yli147 9
  • Vm: Add support to probe Nacl features.

    In the recent updates to the Nacl extension [0], feature probing was introduced, which allows the guest hypervisor to probe which features are supported by the host hypervisor.

    Also, the SetShmem FID argument has changed from a pfn to the full address of the shared memory page.

    [0] https://lists.riscv.org/g/sig-hypervisors/message/256

    opened by rajnesh-kanwal 6
  • Move to a more explicit memory sharing model

    In anticipation of huge page support, the way memory is shared/unshared from a TVM is updated with a more explicit model that expects the host to drive the entire operation. Relying on some new TEE-Host functions, the host can invalidate and remove a specific range of memory based on the request from the TVM.

    opened by sboeuf 6
  • SBI DBCN Console in Salus

    Can we implement the SBI debug console extension in Salus? https://lists.riscv.org/g/tech-unixplatformspec/message/1815 The Linux patches can be found in the above email. Anup reports that it improves guest boot time by ~30%. It should help the boot time of Linux under Salus as well.

    opened by atishp04 6
  • Add PMU functionality...

    This patch series adds PMU counter support.

    1. The first commit adds PMU SBI extension types
    2. The second commit adds convenience wrappers for the SBI calls
    3. The third commit adds a PMU driver
    4. The fourth implements a pass-through layer for PMU SBI calls
    5. The fifth commit adds some counter bookkeeping
    6. The sixth commit adds rudimentary support for counter save and restore
    7. The seventh commit adds PMU counter virtualization for HW counters
    8. The final commit adds test functionality
    opened by atulkharerivos 6
  • Increase the default number of imsic file to 5

    If the host scheduler tries to schedule more vCPUs on a pCPU than its number of IMSIC files, it will result in TVM failure. This is a common scenario when we run two TVMs with 2 vCPUs each, which is a common test case.

    Increasing the number of IMSIC files only helps to run the bare-minimum setup in a multi-TVM SMP configuration.

    In the long run, the host kernel can't do much about it except notify the VMM of the correct reason, i.e. insufficient IMSIC files. It's up to the VMM to decide whether it wants to show an appropriate error to the user or perform lazy pinning if possible.

    opened by atishp04 5
  • Align the TEE[HGI] extension names with CoVE

    This needs to be merged after https://github.com/rivosinc/sbi-rs/pull/15

    As mentioned in that PR, this patch only updates the extension name & ID as per the latest naming changes in the spec. We can modify all the references of "tee" to "cove" as well to fully align, or leave it as it is for now.

    opened by atishp04 5
  • Rework vCPU shared memory register set enumeration

    Instead of having a single call that writes the layout to host memory, which the host must retry if it provides insufficient buffer space, take an approach similar to the SBI PMU interface where one call returns the number of register sets (TvmCpuNumRegisterSets) and another returns the location of a specific register set (TvmCpuGetRegisterSet).

    As part of this change, RegisterSetLocation is compressed to a single u32 so that it's guaranteed to fit in a single general-purpose register. The version field is dropped and instead the versioning is implicit from the RegisterSetId.

    opened by abrestic-rivos 5
  • A Few Questions about riscv_iommu.c

    While playing around with Salus, I noticed that a virtual IOMMU was enabled, since there are prints (when booting the Debian image) like Found RISC-V IOMMU version 0x2, and

    [    0.518950] iommu: Default domain type: Translated 
    [    0.519827] iommu: DMA domain TLB invalidation policy: strict mode 
    

    Then I found a file called riscv_iommu.c (https://github.com/rivosinc/qemu/blob/salus-integration-10312022/hw/riscv/riscv_iommu.c); after reading the code I believe that this C file is the implementation of the virtual IOMMU. I noticed that the vIOMMU was designed as a PCI device, as the TypeInfo implies:

    static const TypeInfo riscv_iommu_pci = {
        .name = TYPE_RISCV_IOMMU_PCI,
        .parent = TYPE_PCI_DEVICE,
        .instance_size = sizeof(RISCVIOMMUStatePci),
        .class_init = riscv_iommu_pci_init,
        .interfaces = (InterfaceInfo[]) {
            { INTERFACE_PCIE_DEVICE },
            { },
        },
    };
    

    For comparison, the TypeInfo in intel_iommu.c (https://github.com/rivosinc/qemu/blob/salus-integration-10312022/hw/i386/intel_iommu.c) implies:

    static const TypeInfo vtd_info = {
        .name          = TYPE_INTEL_IOMMU_DEVICE,
        .parent        = TYPE_X86_IOMMU_DEVICE,
        .instance_size = sizeof(IntelIOMMUState),
        .class_init    = vtd_class_init,
    };
    

    Here comes the first question: why implement the IOMMU as a PCI device? As far as I know, a typical IOMMU is usually integrated with the CPU. Maybe this was done for development convenience?

    Putting aside the first question, I then noticed that:

    k->vendor_id = PCI_VENDOR_ID_RIVOS;
    k->device_id = PCI_DEVICE_ID_RIVOS_IOMMU;
    ...
    k->class_id = 0x0806;
    

    in the riscv_iommu_pci_init function. I decided to observe the behavior of the vIOMMU. In the QEMU monitor, using info pci, it says:

    (qemu) info pci
    info pci
      Bus  0, device   0, function 0:
        Host bridge: PCI device 1b36:0008
          PCI subsystem 1af4:1100
          id ""
      Bus  0, device   1, function 0:
        Class 0264: PCI device 1b36:0010
          PCI subsystem 1af4:1100
          IRQ 0, pin A
          BAR0: 64 bit memory at 0x100000000 [0x100003fff].
          id ""
      Bus  0, device   2, function 0:
        Class 2054: PCI device 1efd:edf1
          PCI subsystem 1af4:1100
          BAR0: 64 bit memory at 0x17ffff000 [0x17fffffff].
          id ""
      Bus  0, device   3, function 0:
        Ethernet controller: PCI device 1af4:1041
          PCI subsystem 1af4:1100
          IRQ 0, pin A
          BAR1: 32 bit memory at 0x40000000 [0x40000fff].
          BAR4: 64 bit prefetchable memory at 0x100004000 [0x100007fff].
          BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
          id ""
    
    

    Since PCI_VENDOR_ID_RIVOS, PCI_DEVICE_ID_RIVOS_IOMMU, and 0x0806 equal 0x1efd, 0xedf1, and 2054 (decimal) respectively, I believe that device 2 is the riscv_iommu_pci device. But in Debian, trying to find the vIOMMU using lspci -v, it only says:

    root@debian:~# lspci -v 
    00:00.0 Host bridge: Red Hat, Inc. QEMU PCIe Host bridge
    	Subsystem: Red Hat, Inc. QEMU PCIe Host bridge
    	Flags: fast devsel
    lspci: Unable to load libkmod resources: error -2
    
    00:01.0 Non-Volatile memory controller: Red Hat, Inc. QEMU NVM Express Controller (rev 02) (prog-if 02 [NVM Express])
    	Subsystem: Red Hat, Inc. QEMU NVM Express Controller
    	Flags: bus master, fast devsel, latency 0
    	Memory at 100000000 (64-bit, non-prefetchable) [size=16K]
    	Capabilities: [40] MSI-X: Enable+ Count=65 Masked-
    	Capabilities: [80] Express Root Complex Integrated Endpoint, MSI 00
    	Capabilities: [60] Power Management version 3
    	Kernel driver in use: nvme
    
    00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
    	Subsystem: Red Hat, Inc. Virtio network device
    	Flags: bus master, fast devsel, latency 0
    	Memory at 40000000 (32-bit, non-prefetchable) [size=4K]
    	Memory at 100004000 (64-bit, prefetchable) [size=16K]
    	Capabilities: [98] MSI-X: Enable+ Count=4 Masked-
    	Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
    	Capabilities: [70] Vendor Specific Information: VirtIO: Notify
    	Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
    	Capabilities: [50] Vendor Specific Information: VirtIO: ISR
    	Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
    	Kernel driver in use: virtio-pci
    

    That leads to the other question: why hasn't the riscv_iommu_pci device been recognized correctly by pciutils, while the QEMU monitor seems normal? Possibly a driver issue, a mistaken setting, or some other problem?

    I am currently researching CoVE (or AP-TEE); it would be very helpful to me if you could answer my questions. :smiling_face_with_tear:

    opened by GDHNES 0
  • guestvm to access flash0

    I'm testing the code via scripts/run_tellus.sh; is there any way to let the guestvm access the flash device defined in QEMU?

    The flash device address is 0x20000000 https://github.com/rivosinc/qemu/blob/ae9f6df84c966e3608ed40fdacb55e2567946c60/hw/riscv/virt.c#LL91C9-L91C9

    Thanks for any input.

    opened by yli147 0
  • page-tracking: Optimize huge pages maintenance

    The goal is to optimize the way the PageTracker updates huge pages, by acting only on the head page rather than updating all the 4k pages from the huge page range.

    When a huge page is mapped into a TVM, we can manage only the head page through the block/unblock lifecycle. When the TVM gets destroyed, we need all the pages to be back to a proper state, which is why we force the subpages to be the same state as the head page.

    We also need to handle the case where a page range is shared differently: in that case, we can't assume two TVMs share the page at the same granularity, which is why we can't manipulate only the head page and must instead update all the subpages.

    Lastly, we must handle page demotion in a special way, by updating all the subpage states, given that the demotion happened while the PageTracker was operating only on the head page of the huge page.

    opened by sboeuf 0
  • Clean up tellus / guestvm linking

    We should be inserting the guestvm binary into the tellus binary via a linker section, similar to how the u-mode binary is injected into salus, rather than the janky script we have that just appends the two binaries together.

    cc: @stillson

    opened by abrestic-rivos 0
  • Map hypervisor text and rodata segments as RO

    Probably needs some minor linker script massaging to put these on a separate page from the data and bss segments. We should also consider having a separate segment for "RO after boot" data that gets initialized during boot and then marked RO post-boot.

    enhancement 
    opened by abrestic-rivos 0
Owner
Rivos Inc.