Network simulation in Rust

Overview

netsim - A Rust library for network simulation and testing (currently Linux-only).

netsim is a crate for simulating networks for the sake of testing network-oriented Rust code. You can use it to run Rust functions in network-isolated containers, and assemble virtual networks for these functions to communicate over.

Spawning threads into isolated network namespaces

Network namespaces are a Linux feature which can provide a thread or process with its own view of the system's network interfaces and routing table. This crate's spawn module provides the new_namespace function for spawning threads into their own network namespaces. In this demonstration we list the visible network interfaces using the get_if_addrs crate.

extern crate netsim;
extern crate get_if_addrs;
extern crate tokio_core;
use netsim::spawn;
use tokio_core::reactor::Core;
use get_if_addrs::get_if_addrs;

// First, check that there is at least one network interface. This will generally be true
// since there will at least be the loopback interface.
let interfaces = get_if_addrs().unwrap();
assert!(interfaces.len() > 0);

// Now check how many network interfaces we can see inside a fresh network namespace. There
// should be zero.
let spawn_complete = spawn::new_namespace(|| {
    get_if_addrs().unwrap()
});
let mut core = Core::new().unwrap();
let interfaces = core.run(spawn_complete).unwrap();
assert!(interfaces.is_empty());

This demonstrates how to launch a thread - perhaps running an automated test - into a sandboxed environment. However, an environment with no network interfaces is pretty useless...
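
As a purely illustrative sketch of that testing use case - the test name and port number below are made up - a callback spawned with spawn::new_namespace can bind hard-coded ports without ever clashing with sockets on the host:

extern crate netsim;
extern crate tokio_core;

use std::net::UdpSocket;
use netsim::spawn;
use tokio_core::reactor::Core;

#[test]
fn binds_without_touching_the_host() {
    let spawn_complete = spawn::new_namespace(|| {
        // A wildcard bind succeeds even though the fresh namespace has no
        // interfaces, and the port can't collide with anything outside it.
        let _socket = UdpSocket::bind("0.0.0.0:45000").unwrap();
    });
    let mut core = Core::new().unwrap();
    core.run(spawn_complete).unwrap();
}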

Creating virtual interfaces

We can create virtual IP and Ethernet interfaces using the types in the iface module. For example, IpIface lets you create a new IP (TUN) interface and implements futures::{Stream, Sink} so that you can read/write raw packets to it.

extern crate netsim;
extern crate tokio_core;
extern crate futures;

use std::net::Ipv4Addr;
use tokio_core::reactor::Core;
use futures::{Future, Stream};
use netsim::iface::IpIfaceBuilder;
use netsim::spawn;

let mut core = Core::new().unwrap();
let handle = core.handle();

// Create a network interface named "netsim".
// Note: this will likely fail with "permission denied" unless we run it inside a fresh
// network namespace (e.g. one created by spawn::new_namespace), since creating a TUN
// interface requires CAP_NET_ADMIN.
let iface = {
    IpIfaceBuilder::new()
    .name("netsim")
    .ipv4_addr(Ipv4Addr::new(192, 168, 0, 24), 24)
    .build(&handle)
    .unwrap()
};

// Read the first `Ipv4Packet` sent from the interface.
let packet = core.run({
    iface
    .into_future()
    .map_err(|(e, _)| e)
    .map(|(packet_opt, _)| packet_opt.unwrap())
}).unwrap();

However, for simply testing network code, you don't need to create interfaces manually like this.

Sandboxing network code

Rather than performing the above two steps individually, you can use the spawn::ipv4_tree function along with the node module to set up a namespace with an IPv4 interface for you.

extern crate netsim;
extern crate tokio_core;
extern crate futures;

use std::net::UdpSocket;
use tokio_core::reactor::Core;
use futures::{Future, Stream};
use netsim::{spawn, node, Network, Ipv4Range};
use netsim::wire::Ipv4Payload;

// Create an event loop and a network to bind devices to.
let mut core = Core::new().unwrap();
let network = Network::new(&core.handle());
let handle = network.handle();

// Spawn a network with a single node - a machine with an IPv4 interface in the 10.0.0.0/8
// range, running the given callback.
let (spawn_complete, ipv4_plug) = spawn::ipv4_tree(
    &handle,
    Ipv4Range::local_subnet_10(),
    node::ipv4::machine(|_ipv4_addr| {
        // Send a packet out the interface
        let socket = UdpSocket::bind("0.0.0.0:0").unwrap();
        socket.send_to(b"hello world", "10.1.2.3:4567").unwrap();
    }),
);

let (packet_tx, packet_rx) = ipv4_plug.split();

// Inspect the packet sent out the interface.
core.run({
    packet_rx
    .into_future()
    .map(|(packet_opt, _)| {
        let packet = packet_opt.unwrap();
        match packet.payload() {
            Ipv4Payload::Udp(udp) => {
                assert_eq!(&udp.payload()[..], &b"hello world"[..]);
            },
            _ => panic!("unexpected payload"),
        }
    })
}).unwrap();

Simulating networks of communicating nodes

Using the spawn and node modules you can set up a bunch of nodes connected over a virtual network.

extern crate tokio_core;
extern crate future_utils;
extern crate netsim;

use std::net::UdpSocket;
use tokio_core::reactor::Core;
use netsim::{spawn, node, Network, Ipv4Range};

// Create an event loop and a network to bind devices to.
let mut core = Core::new().unwrap();
let network = Network::new(&core.handle());
let handle = network.handle();

let (tx, rx) = std::sync::mpsc::channel();

// Create a machine which will receive a UDP packet and return its contents
let receiver_node = node::ipv4::machine(move |ipv4_addr| {
    let socket = UdpSocket::bind(("0.0.0.0", 1234)).unwrap();
    // Tell the sending node our IP address.
    tx.send(ipv4_addr).unwrap();
    let mut buffer = [0; 1024];
    let (n, _sender_addr) = socket.recv_from(&mut buffer).unwrap();
    buffer[..n].to_owned()
});

// Create the machine which will send the UDP packet
let sender_node = node::ipv4::machine(move |_ipv4_addr| {
    let receiver_ip = rx.recv().unwrap();
    let socket = UdpSocket::bind("0.0.0.0:0").unwrap();
    socket.send_to(b"hello world", (receiver_ip, 1234)).unwrap();
});

// Connect the sending and receiving nodes via a router
let router_node = node::ipv4::router((receiver_node, sender_node));

// Run the network with the router as the top-most node. `_plug` could be used to
// send/receive packets to/from outside the network.
let (spawn_complete, _plug) = spawn::ipv4_tree(&handle, Ipv4Range::global(), router_node);

// Drive the network on the event loop and get the data returned by the receiving node.
let (received, ()) = core.run(spawn_complete).unwrap();
assert_eq!(&received[..], b"hello world");

All the rest

It's possible to set up more complicated (non-hierarchical) network topologies, Ethernet networks, namespaces with multiple interfaces etc. by directly using the primitives in the device module. Have an explore of the API, and if anything needs clarification or could be better designed then let us know on the bug tracker :)
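
As a small illustration of deeper (still hierarchical) topologies, the recipes in the node module appear to compose, so a router can sit beneath another router. The following rough, untested sketch assumes the same imports and Network/handle setup as the previous example, with placeholder closures standing in for real test code:

// Three machines; the closures are hypothetical placeholders.
let leaf_a = node::ipv4::machine(|_ipv4_addr| { /* node-local test code */ });
let leaf_b = node::ipv4::machine(|_ipv4_addr| { /* node-local test code */ });
let leaf_c = node::ipv4::machine(|_ipv4_addr| { /* node-local test code */ });

// A sub-router connecting two of the machines, attached to a top-level router
// alongside the third machine.
let subnet = node::ipv4::router((leaf_a, leaf_b));
let topology = node::ipv4::router((subnet, leaf_c));

let (spawn_complete, _plug) = spawn::ipv4_tree(&handle, Ipv4Range::global(), topology);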

Dependencies

netsim only runs on Linux as it makes use of the Linux namespace APIs. It also depends on the POSIX capabilities library, usually packaged as libcap-dev or libcap-devel.
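
On Debian or Ubuntu, for example, this can usually be installed with:

$ sudo apt-get install libcap-dev

and on Fedora with:

$ sudo dnf install libcap-devel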

Testing

netsim has its own unit and integration tests. There are different ways to run them depending on the environment you're in.

If you're on a Linux host machine, you can just use cargo test like normal:

$ cargo test

If you're inside a Linux container, say running tests on Travis CI, the Linux namespace APIs probably won't be available. In this case you need to disable the linux_host feature of this crate:

$ cargo test --no-default-features

License

This library is dual-licensed under the Modified BSD (LICENSE-BSD https://opensource.org/licenses/BSD-3-Clause) or the MIT license (LICENSE-MIT https://opensource.org/licenses/MIT) at your option.

Comments
  • Update rust to 2018 edition

    Makes builds simpler because we don't have to track the Rust versions netsim builds on. Now it's supposed to build on any recent stable Rust compiler supporting the 2018 edition.

    opened by povilasb 2
  • Why not allow Ipv4Range have a single IP?

    Say I have this code snippet:

    #[macro_use]
    extern crate net_literals;
    extern crate netsim;
    extern crate tokio_core;
    #[macro_use]
    extern crate unwrap;
    
    use netsim::{node, spawn, Ipv4Range, Network};
    use tokio_core::reactor::Core;
    
    fn main() {
        let mut evloop = unwrap!(Core::new());
        let network = Network::new(&evloop.handle());
    
        let server_recipe = node::ipv4::machine(|ip| {
            println!("[server] ip = {}", ip);
        });
        let (spawn_complete, _ipv4_plug) =
            spawn::ipv4_tree(&network.handle(), Ipv4Range::new(ipv4!("78.100.10.1"), 31), server_recipe);
    
        unwrap!(evloop.run(spawn_complete));
    }
    

    This is not doing much, just spawning a single IPv4 machine. In such a case I might like to assign a specific address to my machine. But if you run this code, netsim panics because of https://github.com/canndrew/netsim/blob/master/src/range/v4.rs#L112. Is there any particular reason we don't want to allow this behavior?

    Static IPs might also be useful when we connect 2 nodes with a router - then we don't need to exchange the IP addresses - less code in tests :)

    opened by povilasb 2
  • Update tokio

    netsim still uses an old version of Tokio (from before the Tokio reform), which requires me to depend on multiple versions of Tokio. I'll attempt to revive the new-tokio branch and update to the latest Tokio version.

    opened by povilasb 1
  • NAT tests

    I was playing around with different simulated NAT types to see what kind of port allocations are used with specific NAT types. As a result I came up with a couple of integration tests, which I thought might add some additional value by documenting and testing the public API.

    opened by povilasb 1
  • Network::spawn methods

    This PR adds helper methods Network::spawn_ipvx_tree(). So the API is now

        let network = Network::new(&evloop.handle());
        let node_recipe = node::ipv4::machine(|ip| ());
        let (spawn_complete, _ipv4_plug) = network.spawn_ipv4_tree(Ipv4Range::default, node_recipe);
    

    instead of

        let network = Network::new(&evloop.handle());
        let node_recipe = node::ipv4::machine(|ip| ());
        let (spawn_complete, _ipv4_plug) = spawn::ipv4_tree(&network.handle(), Ipv4Range::default, node_recipe);
    

    Which seems to be more intuitive to me. See if you like it :)

    opened by povilasb 1
  • CI scripts

    Add CI scripts and fix clippy issues so that Travis would pass :) Note that at this point CI will still fail though - we also need to exclude the Linux namespace related tests.

    opened by povilasb 1
  • Too easy to leak devices

    After the user creates a bunch of devices and spawns them onto the event loop, there is no longer any way to destroy them and, in some circumstances, they will not destroy themselves automatically. Devices should probably be scoped.

    opened by canndrew 1
  • The API is too confusing and redundant

    Having separate spawn, device and node APIs isn't really necessary. The spawn API can probably be removed and merged into the device and node modules.

    opened by canndrew 1
  • Leaks threads

    When cargo test finishes, threads that are created by netsim can linger on in the background, consuming resources. They should automatically be killed when the parent process dies.

    opened by canndrew 1
  • Refactoring

    I'm planning to implement packet broadcasting. My guess is that it should be implemented in the router, so I did some code refactoring and added tests to the router to make future changes safer.

    opened by povilasb 0
  • Update libc

    The latest mio version (0.6.16) depends on libc >= 0.2.42. I tried upgrading libc to 0.2.43, which is the latest at the moment. Unfortunately, some tests failed, but I noticed that they pass with 0.2.42, so I didn't investigate what caused the failures since that version satisfies my needs for now :)

    opened by povilasb 0
  • maintenance

    The codebase seems quite dated; after a quick look, some maintenance tasks would be:

    • [ ] update dependencies
    • [ ] migrate to new futures
    • [ ] support async-std
    • [ ] use thiserror
    opened by dvc94ch 2
  • Can't run my own Tokio reactor in netsim

    So with the latest Tokio integrated (https://github.com/canndrew/netsim/pull/25), it's no longer possible to spawn another Tokio reactor while netsim is running. So something like this is not possible:

    use futures::future;
    use netsim::{Network, node, Ipv4Range};
    use tokio::runtime::current_thread::Runtime;
    use unwrap::unwrap;
    
    fn main() {
        let network = Network::new();
        let network_handle = network.handle();
    
        let node1 = node::ipv4::machine(move |ip| {
            println!("node1: {}", ip);
    
            let fut = future::ok::<_, ()>(());
            let mut evloop = unwrap!(Runtime::new());
            unwrap!(evloop.block_on(fut));
    
            future::ok(())
        });
    
        let fut = future::lazy(move || {
            let (spawn_complete, _ip_plug) = network_handle.spawn_ipv4_tree(Ipv4Range::global(), node1);
            spawn_complete
        });
        let mut evloop = unwrap!(Runtime::new());
        unwrap!(evloop.block_on(fut));
    }
    

    and results in an error:

    thread '<unnamed>' panicked at 'Multiple executors at once: EnterError { reason: "attempted to run an executor while another executor is already running" }', src/libcore/result.rs:997:5
    

    In most cases, when we have futures code, this should not be a problem. However, I can see two cases where it will be:

    1. when we have a piece of code that internally runs tokio reactor;
    2. when we have futures that do not implement Send trait.

    The first thing we could do here is at least document the situation. Another thing to consider is providing a netsim machine node that runs in a forked process instead of a thread.

    opened by povilasb 0
  • Only runs on Linux

    netsim is pretty heavily tied to Linux-only APIs for creating namespaces. I need to find out what APIs (if any) exist on other platforms for doing the same thing. Finding out how Docker is implemented on other platforms might be a good start.

    opened by canndrew 0
  • Doesn't work on TravisCI

    The clone call which creates the network namespace fails on Travis with permission denied, even though it supposedly shouldn't need any special permissions on any recent version of Linux. This needs to be investigated.

    opened by canndrew 0