shavee is a program that automatically decrypts and mounts ZFS datasets using a Yubikey (HMAC) or any USB drive as a second factor, with PAM support to auto-mount home directories.

shavee


shavee is a simple program, written in Rust, that decrypts and mounts encrypted ZFS user home directories at login using a Yubikey (HMAC) or a simple USB drive as a second factor.

Supported methods

This program currently supports two methods for 2FA, plus a password-only fallback:

1. Yubikey

In this mode the program looks for a Yubikey at login and uses its HMAC challenge-response mode on slot 2, along with your password, to derive the final encryption key.

Yubikey mode is set with the -y flag.

NOTE: It currently only uses slot 2 of the Yubikey for HMAC.
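If slot 2 of your Yubikey isn't programmed for HMAC challenge-response yet, it can usually be set up with ykman from yubikey-manager (shown here as an assumption about your setup; shavee itself does not program the key):

# Program slot 2 with a randomly generated HMAC-SHA1 secret
ykman otp chalresp --generate 2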

2. File/USB

In this mode the program looks for a file (it can be any file) and uses it, along with your password, to derive the final encryption key.

File mode is set using the -f option.

The idea behind this method is to keep the file on a USB storage device and present it during login to derive the encryption key.

You can use any preexisting file.

Note: Since the file becomes part of your encryption key, and its security cannot be guaranteed the way a Yubikey's can, you are responsible for keeping it secure.
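For example, you could generate a random key file on the USB drive like this (a rough sketch; the path is just a placeholder):

# Write 512 random bytes to a key file on the mounted USB drive
dd if=/dev/urandom of=/media/usb/secretfile bs=512 count=1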

3. Password only

If no second factor is specified, the program will use the password alone as a single factor.
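For example, a password-only unlock might look like this (the dataset name is just an example):

# No -y or -f flag: the key is derived from the password alone
shavee -z zroot/data/home/hunter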

Build and Install

  1. Install Rust
  2. Clone the repo:
git clone https://github.com/ashuio/shavee.git
  3. Build:
cargo build --release
  4. Place the binary in your bin directory:
sudo cp target/release/shavee /usr/bin
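To confirm the binary is installed and on your PATH, you can print its usage (assuming the usual clap-generated help output):

# Print usage/help to verify the install
shavee --help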

Modes

  • pam : For use with the pam_exec.so module (used with the -p flag)
  • create : Creates new datasets with the derived key (used with the -c option)

Flags/Options

  • -y : Use Yubikey for 2FA
  • -f : Use any file as the second factor; takes a file path as its argument.
  • -p : Enable PAM mode
  • -c : Create ZFS dataset with the derived encryption key
  • -z : Used in conjunction with any of the above options; unlocks and mounts the given dataset with the derived key instead of printing it. Takes a ZFS dataset path as its argument. (Automatically appends the username in PAM mode.)

NOTE: The -y (Yubikey mode) flag and the -f (File mode) option are interchangeable.
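A few illustrative combinations (paths and dataset names are placeholders):

# Print the derived key (no -z) using a file as the second factor
shavee -f /media/usb/secretfile
# Unlock and mount a dataset using the Yubikey as the second factor
shavee -y -z zroot/data/home/hunter/secrets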

Configure ZFS Datasets

NOTE: If using shavee with PAM, your dataset password should be the SAME as your user account password for it to work automatically.

NOTE: Remember to update your encryption key as well if you update your password.


You can change/update the key for existing ZFS datasets by running

shavee -c <zfs dataset path>

Example

shavee -y -c zroot/data/home/hunter

Here we use the Yubikey as our second factor. (It can be omitted for password-only auth.)

Note: Encryption must already be enabled and the key loaded to change the key of an existing dataset.
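If the key isn't loaded yet, load it first with the current passphrase, or with shavee -z if the current key was itself derived by shavee. A rough sketch:

# Load the existing key (prompts for the current passphrase), then set the new derived key
sudo zfs load-key zroot/data/home/hunter
shavee -y -c zroot/data/home/hunter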

Create a new dataset

To create a new dataset with our derived key, run

sudo shavee -c <Desired dataset>

Example

sudo shavee -f /mnt/usb/secretfile -c zroot/data/home/hunter

Here we use a FILE as our second factor. (It can be omitted for password-only auth.)
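You can then check the dataset's encryption state with standard zfs commands (a quick sketch):

# Confirm encryption is enabled and the key is loaded
zfs get encryption,keystatus zroot/data/home/hunter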

Use shavee to unlock and mount any ZFS partition

Simply add the -z option to unlock any ZFS dataset.

Example

shavee -y -z zroot/data/home/hunter/secrets

Use in Scripts

You can also pipe the password in directly, for use in scripts.

Example

echo "hunter2" | shavee -y -z zroot/data/home/hunter/secrets

Here "hunter2" will be treated as the password

Use USB Drive instead of a Yubikey

You can use the -f option instead of the -y flag to substitute a Yubikey with any USB Drive.

Auto-mount the USB drive so shavee can find the required keyfile at login

We can use udev for this. Simply create /etc/udev/rules.d/99-usb-automount.rules and add the following rule, filling in your drive's filesystem UUID (left empty here):

", RUN{program}+="/usr/bin/systemd-mount --no-block --automount=yes --collect $devnode " ">
ACTION=="add", SUBSYSTEMS=="usb", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="", RUN{program}+="/usr/bin/systemd-mount --no-block --automount=yes --collect $devnode "

Example

ACTION=="add", SUBSYSTEMS=="usb", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="ADB0-DA9C", RUN{program}+="/usr/bin/systemd-mount --no-block --automount=yes --collect $devnode /media/usb"

Here we're mounting the first partition of the USB disk at /media/usb.

You can get the UUID by running

udevadm info --query=all --name=<Target disk> | grep ID_FS_UUID=

Example

udevadm info --query=all --name=/dev/sdb1 | grep ID_FS_UUID=

Run udevadm control --reload-rules afterwards to make sure the new rules are loaded.
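For example (udevadm trigger is optional; it re-evaluates rules for devices that are already plugged in):

# Reload udev rules and (optionally) re-trigger events for already-connected devices
sudo udevadm control --reload-rules
sudo udevadm trigger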

Use shavee with PAM to auto-unlock your home directory

This program uses the pam_exec.so module to execute during the login process.

Simply add the following line to your desired PAM login method file.

In our example we will be adding it to /etc/pam.d/sddm to handle graphical logins and /etc/pam.d/login to handle CLI logins.

Add the following line to your PAM config file:

auth    optional    pam_exec.so expose_authtok /usr/bin/shavee -p -y -z <zfs dataset>

Example

auth    optional    pam_exec.so expose_authtok /usr/bin/shavee -p -y -z zroot/data/home

Where zroot/data/home mounts to /home
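As a sketch of the assumed layout (names are illustrative), the parent dataset mounts at /home and each per-user child dataset mounts beneath it:

# Parent dataset mounted at /home; child datasets inherit the mountpoint
zfs set mountpoint=/home zroot/data/home
# zroot/data/home/hunter then mounts at /home/hunter once unlocked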

Dual home directories in ZFS

Since ZFS mounts datasets OVER preexisting directories, and we defined our module in PAM as optional, we still get authenticated with JUST the password even though our dataset is NOT decrypted (e.g. because the Yubikey was not inserted).

We can use this to our advantage and essentially have TWO home directories.

The first is your normal encrypted home directory, which is unlocked and mounted when your Yubikey is present at login.

The second is the directory that is already present underneath, which is used on decryption failure, i.e. when no Yubikey is inserted at login.
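A minimal sketch of the setup, reusing the user and dataset names from the examples above:

# Fallback home directory that lives directly on the parent filesystem
sudo mkdir -p /home/hunter
sudo chown hunter:hunter /home/hunter
# On a successful unlock, zroot/data/home/hunter is mounted over /home/hunter,
# hiding the fallback directory until the dataset is unmounted again.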

Let me know if you're interested and maybe I can write up a more detailed guide.

Comments
  • Unlock via PAM with Yubikey results in "No such file or directory"

    When trying out 0824841, I was unable to get PAM to unlock my home directory with a Yubikey plugged in, as indicated by the example. Adding a log parameter to the pam_exec.so command reveals:

    Error: Failed to run zfs command for rpool/USERDATA/hunter Error: No such file or directory (os error 2)

    I'm on Ubuntu 20.04 with root on ZFS. I created rpool/USERDATA/hunter via shavee -y -c -z rpool/USERDATA/hunter. I manually set the mountpoint to /home/hunter and canmount to noauto (prevents asking for the password on boot). I also set overlay to on to get rid of the warning that the directory is not empty.

    I am able to unlock the directory manually with shavee -y -z rpool/USERDATA/hunter.

    opened by aschaap 14
  • [bug] Command line parse error

    While testing the 0.1.4 branch at 9089e755a62c9585fe24b129d1e0b87200d8b20d (the main branch most definitely has the same bug too), I noticed that an unexpected error happens with this command line, but only if the PAM_USER environment variable is not set.

    $ ./shavee unexpected --pam -z testpool/testdataset
    thread 'main' panicked at 'Panic! Something unexpected happened! Please help by reporting it as a bug.: NotPresent', shavee-bin/src/args.rs:198:55
    

    This seems to be caused by a corner case when the binary is executed in PAM mode while passing an unexpected argument to the binary. Clap fails to reject it correctly.

    It seems that this section causes Clap to replace PAM_USER with the unexpected argument, instead of reporting an error for the missing PAM_USER environment variable.

    Then here PAM_USER is used even though it doesn't exist, and the program panics.

    The unit test didn't catch it because it sets the PAM_USER environment variable.

    If the plan is that the 0.1.4 branch will have a separate libpam_shavee.so instead of the --pam PAM mode, then it would be better to move validating PAM_USER and appending it to the dataset into that PAM module, and remove it from the main binary's argument handling.

    opened by kiavash-at-work 8
  • Refactoring Argument parsing functions

    This is a quick and dirty attempt to refactor as much code as possible so that the CLI binary and PAM module argument handling use common functions, minimizing duplicated code.

    I have only sanity checked it and didn't test against all possible combinations.

    opened by kiavash-at-work 6
  • [Feature Request] generate hash while waiting for user input

    Currently, the hash generation happens after the user has entered the password, but the two can happen in parallel: while waiting for the user to enter the password, the program could spawn a task to generate the hash in the background.

    The time saved depends on the speed of the user's computer and of the code execution.

    opened by kiavash-at-work 6
  • [Feature Request] Max Limit for file used for hashing

    Currently, there is no upper limit on the size of the file used to generate the hash. If this file is small, the hash is generated quite fast, but as it grows, it takes longer. For example, the program never exits if shavee -f /dev/zero is executed, because /dev/zero has no EOF. While this doesn't seem to be a safe usage of the program, it demonstrates a corner case.

    A potential solution is to either hardcode a max size limit or pass it as an optional argument to the program.

    opened by kiavash-at-work 5
  • Odd piece of code

    In main.rs it looks like dataset passwords are being suffixed with "Aveesha". Is this a remnant from testing that should be removed?

    https://github.com/ashuio/shavee/blob/af275d00cef4a55e4c0e67047278c20cee644466/src/main.rs#L28

    opened by marcaddeo 5
  • Unit tests for filehash.rs

    This PR contains:

    1. Implementation of unit tests for filehash.rs
      • Some of the tests depend on known, locally generated files; they will run sequentially.
        • The test files will be generated by the unit test code,
        • The test files/folder will be deleted at the end of the unit test code.
    2. filehash.rs is refactored to streamline the unit tests and error handling.
    opened by kiavash-at-work 4
  • Unit test for filehash.rs

    This PR includes:

    • unit tests for filehash.rs functions.
      • The tempfile crate was brought into scope only for unit tests [dev-dependencies]. It is used for generating a temp folder in the system temp directory. This temp folder will be removed as soon as its variable goes out of scope.
    • removed pub from get_filehash_local() and get_filehash_http_sftp().
      • They are called only by get_filehash() which is already in the scope.
    • refactored the error conversion to String from get_filehash_local() and get_filehash_http_sftp() into get_filehash()
      • This move makes it possible to implement unit tests for get_filehash_local() and get_filehash_http_sftp() that check and verify the complete error codes.
      • The code is easier to read and follow because the Err-to-String conversion now happens only once in each branch of get_filehash().
      • The unit test for get_filehash() only verifies the first 5 characters of the error message, because get_filehash() returns the error as a String and this avoids variation in error messages across underlying operating systems.
    opened by kiavash-at-work 2
  • Code clean up and an example of unit test

    This PR contains 3 changes:

    1. Unit test for port_check() function in args.rs.
    2. The fix for port 0 to be rejected as an invalid port.
    3. Code cleanup, merging match arms and removing unnecessary brackets
    opened by kiavash-at-work 2
  • 'size' and parallel hash generation

    This PR includes the implementation for #9 and #10.

    1. The hash is now generated while the user enters the password.
    2. Optional SIZE can be passed as CLI argument
    3. Parse 'size' as 'Option'
    4. Updated plumbing logic to pass variables to lib functions
    opened by kiavash-at-work 1
  • Fix for PAM_USER env and more

    This PR contains:

    1. Fix for bug #7
      • NOTE: The associated unit test is still not able to test for this bug and possible regression can happen.
    2. Refactored code to make main() only for entry and exit. This helps in integration test implementation.
    3. Added Option<> support to Sargs and related code.
    4. Refactored key calculation into a separate function to eliminate repeated code.
    opened by kiavash-at-work 1
  • Ignore already loaded key

    Hi,

    It would be nice to test whether the key is already loaded before asking for a password.

    Currently it gives an error: Error in mounting user ZFS dataset: Key load error: Key already loaded for

    Thanks,

    opened by edillmann 2
  • [TODO] Complete unit test coverage

    Currently missing unit test coverage as of PR #19:

    • [ ] Pam Module functions in shavee-pam::lib.rs
    • [ ] All Yubikey related functions
    • [ ] Simple (2~4 lines) functions in shavee-core::logic.rs
    opened by kiavash-at-work 0
  • Unlock ZFS Datasets with Shavee during boot

    I'm using Shavee to unlock a ZFS dataset as part of my boot process, and thought I'd share how I'm doing it:

    /etc/systemd/system/zfs-shavee-unlock@.service

    [Unit]
    Description=Unlock ZFS Dataset %I with Shavee
    DefaultDependencies=no
    Before=systemd-user-sessions.service
    Before=zfs-mount.service
    After=zfs-import.target
    After=systemd-vconsole-setup.service
    
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/sh -c 'set -eu;keystatus="$$(/sbin/zfs get -H -o value keystatus "%I")";[ "$$keystatus" = "unavailable" ] || exit 0;count=0;while [ $$count -lt 3 ];do  systemd-ask-password --id="zfs:%I"    "Enter passphrase for %I"|    shavee -y -s 1 -z "%I" && exit 0;  count=$$((count + 1));done;exit 1'
    ExecStop=/bin/sh -c 'set -eu;keystatus="$$(/sbin/zfs get -H -o value keystatus "%I")";[ "$$keystatus" = "available" ] || exit 0;/sbin/zfs unload-key "%I"'
    
    [Install]
    WantedBy=zfs-mount.service
    

    I'm using Slot 1 for HMAC challenges on my Yubikey, so you may need to alter the Shavee command if you're using a different slot

    Then just enable the service for your encrypted pool, e.g. to unlock zroot/data you'd do systemctl enable zfs-shavee-unlock@zroot-data

    opened by marcaddeo 8