innernet

Overview

A private network system that uses WireGuard under the hood. See the announcement blog post for a longer-winded explanation.

innernet is similar in its goals to Slack's nebula or Tailscale, but takes a bit of a different approach. It aims to take advantage of existing networking concepts like CIDRs and the security properties of WireGuard to turn your computer's basic IP networking into more powerful ACL primitives.

innernet is not an official WireGuard project, and WireGuard is a registered trademark of Jason A. Donenfeld.

This has not received an independent security audit, and should be considered experimental software at this early point in its lifetime.

Usage

Server Creation

Every innernet network needs a coordination server to manage peers and provide endpoint information so peers can contact each other. Create a new one with

sudo innernet-server new

The init wizard will ask you questions about your network and give you some reasonable defaults. It's good to familiarize yourself with network CIDRs, as a lot of innernet's access control is based upon them. As an example, let's say the root CIDR for this network is 10.60.0.0/16. Server initialization creates a special "infra" CIDR which contains the innernet server itself and is reachable from all CIDRs on the network.
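
As a sketch of how the CIDRs in this example might nest (the infra CIDR's actual range is chosen during initialization, so 10.60.1.0/24 here is only an assumption for illustration):

10.60.0.0/16 example network (root)
  10.60.1.0/24 infra (assumed range; contains the innernet server)
  10.60.64.0/24 humans (created in the next step)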

Next we'll also create a humans CIDR where we can start adding some peers.

sudo innernet-server add-cidr <interface>

For the parent CIDR, you can simply choose your network's root CIDR. The name will be humans, and the CIDR will be 10.60.64.0/24 (not a great example unless you only want to support 256 humans, but it works for now...).

By default, peers which exist in this new CIDR will only be able to contact peers in the same CIDR, and the special "infra" CIDR which was created when the server was initialized.

A typical workflow for creating a new network is to create an admin peer from the innernet-server CLI, and then continue using that admin peer via the innernet client CLI to add any further peers or network CIDRs.

sudo innernet-server add-peer <interface>

Select the humans CIDR, and the CLI will automatically suggest the next available IP address. Any name is fine; just answer "yes" when asked if you would like to make the peer an admin. The process of adding a peer results in an invitation file. This file contains just enough information for the new peer to contact the innernet server and redeem its invitation. It should be transferred securely to the new peer, and it can only be used once to initialize the peer.
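
One reasonable way to transfer it, assuming you have SSH access to the new peer's machine (the filename and host below are placeholders):

scp ./peer-name.toml user@new-peer-host: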

You can run the server with innernet-server serve, or if you're on Linux and want to run it via systemctl, run systemctl enable --now innernet-server@<interface>. If you're on a home network, don't forget to configure port forwarding to the Listen Port you specified when creating the innernet server.
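
If you need to double-check which UDP port the server's WireGuard interface is using (for example, when setting up that port forward), standard WireGuard tooling can report it, assuming wireguard-tools is installed:

sudo wg show <interface> listen-port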

Peer Initialization

Let's assume the invitation file generated in the steps above has been transferred to the machine a network admin will be using.

You can initialize the client with

sudo inn install /path/to/invitation.toml

You can customize the network name if you want to, or leave it at the default. innernet will then connect to the innernet server via WireGuard, generate a new key pair, and register the new public key with the server. The private key in the invitation file can no longer be used.
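
At this point you can sanity-check the underlying tunnel with the standard WireGuard tools (assuming wireguard-tools is installed); a recent handshake with the server indicates the redemption worked:

sudo wg show <interface>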

If everything was successful, the new peer is on the network. You can run things like

sudo inn list

or

sudo inn list --tree

to view the current network and all CIDRs visible to this peer.

Since we created an admin peer, we can also add new peers and CIDRs from this peer via innernet instead of having to always run commands on the server.

Adding Associations between CIDRs

In order for peers from one CIDR to be able to contact peers in another CIDR, those two CIDRs must be "associated" with each other.

With the admin peer we created above, let's add a new CIDR for some theoretical CI servers we have.

sudo inn add-cidr <interface>

The name is ci-servers and the CIDR is 10.60.65.0/24 (it must not overlap with humans), but for this example it can be anything.

For now, we want peers in the humans CIDR to be able to access peers in the ci-servers CIDR.

sudo inn add-association <interface>

The CLI will ask you to select the two CIDRs you want to associate. That's all it takes to allow peers in two different CIDRs to communicate!

You can verify the association with

sudo inn list-associations <interface>

and associations can be deleted with

sudo inn delete-associations <interface>
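
Once two CIDRs are associated, an ordinary ping between peers in them makes for a quick end-to-end check. The address below assumes a hypothetical CI peer at 10.60.65.1:

ping -c 3 10.60.65.1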

Enabling/Disabling Peers

For security reasons, IP addresses cannot be re-used by new peers, and therefore peers cannot be deleted. However, they can be disabled. Disabled peers will not show up in the list of peers when fetching the config for an interface.

Disable a peer with

sudo inn disable-peer <interface>

Or re-enable a peer with

sudo inn enable-peer <interface>

Specifying a Manual Endpoint

The innernet server will try to use the internet endpoint it sees from a peer so other peers can connect to that peer as well. This doesn't always work, and you may want to set an endpoint explicitly. To set an endpoint, use

sudo inn override-endpoint <interface>
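
If you're unsure which public address to supply, one common way to look up your machine's external IP (ifconfig.me is just one of many such services, unrelated to innernet):

curl -4 ifconfig.me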

You can go back to automatic endpoint discovery with

sudo inn override-endpoint -u <interface>

Setting the Local WireGuard Listen Port

If you want to change the port which WireGuard listens on, use

sudo inn set-listen-port <interface>

or unset the port and use a randomized port with

sudo inn set-listen-port -u <interface>
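
In either case, you can verify which port the local interface actually ended up on with standard WireGuard tooling (assuming wireguard-tools is installed):

sudo wg show <interface> listen-port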

Installation

innernet has only officially been tested on Linux and macOS, but we hope to support as many platforms as is feasible!

Runtime Dependencies

It's assumed that WireGuard is installed on your system, either via the kernel module in Linux 5.6 and later, or via the wireguard-go userspace implementation.

WireGuard Installation Instructions

If you're not already a WireGuard user, you may need to load the kernel module:

modprobe wireguard

You can make the kernel module loading persistent with:

echo wireguard > /etc/modules-load.d/wireguard.conf
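
You can then confirm the module is loaded with:

lsmod | grep wireguard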

Arch

yay -S innernet

Ubuntu

Fetch the appropriate .deb packages from https://github.com/tonarino/innernet/releases and install with

sudo apt install ./innernet*.deb
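
Afterwards, a quick way to confirm the client installed correctly:

innernet --version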

macOS

./macos/install.sh

Development

innernet-server Build dependencies

Build:

cargo build --release --bin innernet-server

The resulting binary will be located at ./target/release/innernet-server

innernet Client CLI Build dependencies

Build:

cargo build --release --bin innernet

The resulting binary will be located at ./target/release/innernet
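
If you want the freshly built binaries on your PATH, one conventional option (the destination directory is just a common choice, not something the project mandates):

sudo install -m 755 target/release/innernet-server target/release/innernet /usr/local/bin/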

Releases

  1. Run cargo release [--dry-run] [minor|major|patch|...] to automatically bump the crates appropriately.
  2. Create a new git tag (ex. v0.6.0).
  3. Push (with tags) to the repo.

innernet uses GitHub Actions to automatically produce a Debian package for the releases page.

Comments
  • Segfaults

    It looks like, since a recent inn fetch, we've started getting segmentation faults on the innernet clients as well as the innernet server.

    Tried version 1.5.1 as well as a downgraded version 1.4.0.

    Not quite sure where to start reviewing this. My suspicion would be a corrupted database, but since it's happening on several hosts it seems like something that propagates through inn fetch? It isn't consistent and doesn't happen on all hosts. I guess a debug build would be helpful to get some details on the segfault?

    opened by linuskendall 35
  • Please don't edit /etc/hosts

    Thank you for this wonderful project, this sounds exactly like what I wanted (something like Tailscale, but without a proprietary server part, in Rust, using the in-kernel WireGuard module! 🎉)

    There's one thing stopping me from adopting Innernet immediately: from a glance at the client source code, it appears that Innernet edits /etc/hosts in order to enable .wg names to work.

    In my book, that's very wrong. /etc/hosts is the configuration written and maintained by me, the system administrator, not by any software.

    Off the top of my head, the two better ways to add your hostnames to the system are:

    • make a glibc NSS module (like nss-mymachines)
    • run a special DNS server on the WireGuard interface; tell the rest of the system (e.g. with resolvconf or the systemd-resolved API) to use it for the wg domain.
    opened by bugaevc 18
  • fetch timed out

    I have a weird issue that started soon after I set up several nodes in my innernet.

    I have a setup with 8 nodes: 2 behind NAT1, 5 behind NAT2, and 1 server node that has a public static IP address. I set up my innernet-server, then joined an admin node (gellert) that is behind NAT1, from which I set up all other nodes. I have two CIDR subnetworks, and the issue started after running add-association between the two CIDR subnetworks (perhaps it is unrelated, though).

    My inn list --tree looks like this:

    interface: jenya (...)
      listening_port: 42500
      ip: 10.42.1.2
      10.42.0.0/16 jenya
        10.42.0.1/32 innernet-server
        | 10.42.0.1 innernet-server
        10.42.1.0/24 devices
        | 10.42.1.2 albus
        | 10.42.1.1 gellert
        | 10.42.1.3 pi
        10.42.2.0/24 others
        | 10.42.2.2 dobby
        | 10.42.2.1 lupin
        | 10.42.2.3 ron
        | 10.42.2.4 severus
    

    The gellert and pi are located behind NAT1, the other hosts are located behind NAT2.

    The issue is that inn fetch throws an error:

    inn fetch jenya
    [*] fetching state from server.
    
    [ERROR] http://10.42.0.1:51820/v1/user/state: Network Error: Network Error: Error encountered in the status line: timed out reading response
    

    This happens from both the gellert and pi nodes. inn fetch functions correctly from all other nodes. From gellert and pi I can ping the server 10.42.0.1, with a response of ~35ms. From albus I can ping 10.42.0.1 with a response of ~9ms.

    From gellert and pi I cannot ping anything else except the server. From albus I can reach all devices except gellert and pi.

    Firewalls are temporarily disabled on gellert, pi and the innernet-server.

    opened by esovetkin 13
  • Compiling v1.2.0 on CentOS 7 :: linking with `cc` failed

    I'm trying to compile the client binary on CentOS 7 with cargo install --path client, but I'm running into below error when compiling wgctrl-sys.

    error: linking with `cc` failed: exit code: 1
    

    The full error can be seen here: https://gist.githubusercontent.com/Kerwood/623f446f03447659721cf97861ec56a1/raw/0fd33a7a2d038cb7562e9ec7f7ee3c6b59ac0432/gistfile1.txt

    Besides Rust, I've installed clang-devel, sqlite-devel, and llvm-devel.

    Am I missing something?

    opened by Kerwood 11
  • innernet-server deb should depend on and modprobe wireguard

    I needed to modprobe wireguard (and apt install wireguard) in order for innernet-server serve to work.

    Not sure what the most morally correct way is to do this persistently. I wound up doing echo wireguard > /etc/modules-load.d/wireguard.conf.

    opened by mrdomino 11
  • IPv6 NAT candidate reporting

    This is part of the output when I run innernet show on my device. These devices (arch, router, home-server) all have an IPv4 address (a private IP address, like 192.168.1.1) and an IPv6 address (a global unicast address). But it seems that only the IPv4 one is being used in innernet. If the IPv6 address could be used to connect peers, we could simply get rid of annoying NATs and connect to peers directly over IPv6.

    So is it possible to use IPv6 addresses in innernet?

    I've been using innernet for a while; it is really a great project! Many thanks for your great efforts.

     peer: arch (2t9yDWwyTY...)
        ip: 10.42.1.1
        endpoint: 192.168.147.80:60688
        last handshake: 19 minutes, 18 seconds ago
        transfer: 3.08 MiB received, 7.34 MiB sent
      peer: router (hLu6TevaY5...)
        ip: 10.42.1.3
        endpoint: 192.168.1.1:55552
        transfer: 0 B received, 1418.57 KiB sent
      peer: home-server (VhItncIvyG...)
        ip: 10.42.1.4
        endpoint: 192.168.1.7:33498
        last handshake: 1 minute, 43 seconds ago
        transfer: 36.27 KiB received, 133.89 KiB sent
    
    opened by Enter-tainer 10
  • Dynamic Peers

    We want to use innernet as a VPN for our platform's cloud infrastructure. Some parts of this infrastructure need to be spawned on demand when the workload is high.

    Is "on demand joining" with innernet even possible given that you cannot reuse a peers IP? I was thinking about writing a small authenticated service that creates new invitations when requested.

    Or do I need to look at some other tooling to do the job?

    opened by maaft 10
  • Innernet client not reporting the public IP to the server for NAT traversal

    Hello,

    When I list the peers in my network, the endpoint that is reported is the private IP, not the public IP. I also checked the database on the server and couldn't find any info about the peer's public IP.

    Those peers are NATed, and I thought the client would try to detect the public IP and report it to the server (or the server would use the seen IP) so that all peers could connect to each other using NAT hole punching. Am I misunderstanding something here? Right now my setup consists of a server and 2 clients. All was working well as long as the clients were on the same LAN; however, now I just tried with one client on another LAN and it doesn't work anymore.

    What's the intended behavior, and how do I debug this issue? I'm using the latest version, 1.5.2, on Linux.

    Thank you, Boris

    opened by fersingb 9
  • Innernet server not reachable to redeem invite

    I ran into a weird issue trying to set up innernet earlier. Let me just run through the steps I took.

    I set up (IPv6) CIDRs and created a peer invite on my server, then ran innernet-server serve, which successfully set up WireGuard there.

    I then moved the invite to my laptop and tried innernet install invite.toml. This times out on "Registering keypair on server." To rule out firewall issues I restarted the install but killed it before it timed out, which keeps the WireGuard connection in place. Then, on the server, I ran tcpdump on the WireGuard interface.

    What I see is that if I curl to the innernet server, the server receives the TCP SYN over WireGuard but never responds with an ACK, so my curl retransmits for a bit and then gives up. Running curl on ports 80 and 443, where I have an nginx server reachable through WireGuard, does yield a result, and the packet capture shows a normal TCP handshake followed by HTTP data.

    This suggests to me that the issue is with the innernet API server specifically, but even RUST_LOG=debug stays silent when I try to connect, so I'm kind of stuck. Any idea what I could do to narrow down what's causing this?

    opened by ArdaXi 8
  • override-endpoint doesn't always work

    In one environment (Digital Ocean) I've been able to override endpoints to the peers' external IP addresses successfully; in another (AWS), the command appears to run fine, but the endpoints remain the instances' internal addresses. I've tried different combinations of fetching and restarting the connection, but no luck. Do you have any recommendations for either order of operations, or means for diagnosis?

    I'm running 1.5.0 on all the peers/servers in question.

    opened by bensteinberg 8
  • Build error for innernet binary on AlpineLinux

    I tried to build the innernet client binary on Alpine Linux with stable-x86_64-unknown-linux-musl as the default toolchain for Rust. There is an error while building wgctrl-sys I cannot solve:

    $ cargo build --release --bin innernet
       Compiling serde v1.0.126
       Compiling wgctrl-sys v1.4.0-beta.3 (/home/sysadmin/tmp/innernet-1.4.0-beta.3/wgctrl-sys)
    error: failed to run custom build command for `wgctrl-sys v1.4.0-beta.3 (/home/sysadmin/tmp/innernet-1.4.0-beta.3/wgctrl-sys)`

    Caused by:
      process didn't exit successfully: `/home/sysadmin/tmp/innernet-1.4.0-beta.3/target/release/build/wgctrl-sys-4aed27b106087c18/build-script-build` (signal: 11, SIGSEGV: invalid memory reference)
    warning: build failed, waiting for other jobs to finish...
    error: build failed

    Any hint where to search?

    Thanks, fuller

    opened by fuller0815 8
  • `innernet fetch` often results in a timeout

    On macOS. I'll run a fetch which times out. I then run the command again and it succeeds quickly.

    brian ~ $ sudo innernet fetch <network_name>
    [*] fetching state for <network_name> from server...
    
    [E] http://10.42.0.1:51820/v1/user/state: Network Error: timed out reading response
    
    brian ~ $ sudo innernet fetch <network_name>
    [*] fetching state for <network_name> from server...
    <some peers got updated>
    [*] updating /etc/hosts with the latest peers.
    
    [*] updated interface <network_name>
    
    [*] reporting 1 interface address as NAT traversal candidates
    [*] Attempting to establish connection with 24 remaining unconnected peers...
    [*] Attempting to establish connection with 6 remaining unconnected peers...
    [*] Attempting to establish connection with 2 remaining unconnected peers...
    [*] Attempting to establish connection with 1 remaining unconnected peers...
    
    brian ~ $
    

    Possibly related to #163

    opened by bschwind 0
  • Innernet writes /etc/hosts when only the order of mappings in a section has changed, not the mappings themselves

    Currently /etc/hosts is written very frequently, nearly every minute, because innernet writes the contents of each section as it is received. Apparently those check-ins, or whatever is triggering HostsBuilder to run, receive the list of hosts in the order they respond, which is not always the same. This order is carried through to the order in which mappings appear in the hosts file.

    Would it be possible to sort the list of hosts in each section before checking whether /etc/hosts requires updating? This would, I think, change the behavior so that /etc/hosts is only updated when an IP/hostname mapping has actually changed, making writes much less frequent.

    Thanks!

    opened by kevenwyld 3
  • Add Error Message when `wireguard-go` not found.

    Previously, when trying to add a peer, I got the following output, which was not helpful since I wasn't sure which file could not be found.

    bkn@mbp16 bkn % sudo /Users/bkn/.cargo/bin/innernet  --verbose  install mbp16.toml
    ✔ Interface name · chester
    [*] bringing up interface chester.
    [E] failed to start the interface: chester - No such file or directory (os error 2).
    [*] bringing down the interface.
    [!] failed to bring down interface: chester - WireGuard name file can't be read.
    [E] Failed to redeem invite. Now's a good time to make sure the server is started and accessible!
    
    [E] chester - No such file or directory (os error 2)
    

    After poking through the code I finally figured out the "No such file or directory" error was due to no WireGuard implementation being found. Now when running the above command I get the following output.

    bkn@mbp16 bkn % sudo /Users/bkn/source/rust/innernet/target/debug/innernet  --verbose  install mbp16.toml
    ✔ Interface name · chester
    [*] bringing up interface chester.
    [E] failed to start the interface: chester - Cannot find wireguard implementation 'wireguard-go', to specificy custom wireguard implementation set $WG_USERSPACE_IMPLEMENTATION to wiregaurd binary.
    [*] bringing down the interface.
    [!] failed to bring down interface: chester - WireGuard name file can't be read.
    [E] Failed to redeem invite. Now's a good time to make sure the server is started and accessible!
    
    [E] chester - Cannot find wireguard implementation 'wireguard-go', to specificy custom wireguard implementation set $WG_USERSPACE_IMPLEMENTATION to wiregaurd binary
    

    I find this error message to be more explicit, and if a user encounters a similar problem it's more clear how to remedy this specific error.

    opened by noyez 1
  • Losing Connections

    Hi,

    I'm facing some weird connection issues.

    My setup is the following:

    • 1 device behind an LTE router: clientA
    • 1 server on some cloud service A: clientB
    • 1 server on some other cloud service B: server (runs innernet-server)

    server has ufw configured:

    26346                         ALLOW       Anywhere                  
    Anywhere                   ALLOW       10.5.0.0/16               
    Anywhere                   ALLOW       172.0.0.0/8               
    51820                         ALLOW       Anywhere                    
    Anywhere                   ALLOW       10.42.0.0/16              
    51820/udp                  ALLOW       Anywhere                  
    26346 (v6)                  ALLOW       Anywhere (v6)             
    51820 (v6)                  ALLOW       Anywhere (v6)             
    51820/udp (v6)           ALLOW       Anywhere (v6) 
    

    10.42.0.0/16 is my innernet network.

    clientB (innernet client) and server (innernet server) can see each other all the time.

    But clientA loses the connection about once every 12 hours and can't seem to reconnect.

    innernet fetch <network> also does not help.

    When I do the following steps on server:

    1. ufw disable
    2. innernet fetch
    3. ufw enable

    The connection between clientA and server can be reestablished.

    The connection will stay active for some time (usually multiple hours) until it's lost again.

    I suspect that the LTE router changes access points often. Could this be the cause here? And what does it have to do with my ufw config?

    Any help or ideas to debug this are appreciated.

    opened by maaft 2
  • Something like tailscale's "magicdns"?

    Networking is not my specialty, sorry if this is stupid, obvious, etc. A built-in DNS solution would be interesting, like tailscale's MagicDNS, but maybe less magic and more better?

    L1: It assigns each peer its own DNS entry, so you can do dave:8000 to connect to whatever IP dave has been assigned. Much easier that way.
    L2: Would be even better if dave could name his port, so I could connect to webserver.dave for 8000.
    L3: Dreaming: what if two IPs could both be associated with one DNS entry, and each one can claim ports? So if I have two homelab servers, for instance, one has claimed nextcloud.lab as port 3333 and the other has claimed plex.lab as 1234.
    L4: DNS entries can also be unrelated to peers, so nextcloud could simply map to lab:3333. How whimsical is that?

    Past L1, tailscale gets beaten. I'm not looking for something magic like theirs, but powerful and flexible like the rest of this wonderful program. In case it wasn't clear, these L's are not demands by any means; I'm just very excited and tossing around ideas :)

    Perhaps there is some alternative program I can use that is like this?

    opened by boehs 8