A fast, simple, recursive content discovery tool written in Rust.

Overview


feroxbuster

A simple, fast, recursive content discovery tool written in Rust


demo

πŸ¦€ Releases ✨ Example Usage ✨ Contributing ✨ Documentation πŸ¦€

πŸ˜• What the heck is a ferox anyway?

Ferox is short for Ferric Oxide. Ferric Oxide, simply put, is rust. The name rustbuster was taken, so I decided on a variation. 🀷

πŸ€” What's it do tho?

feroxbuster is a tool designed to perform Forced Browsing.

Forced browsing is an attack where the aim is to enumerate and access resources that are not referenced by the web application, but are still accessible by an attacker.

feroxbuster uses brute force combined with a wordlist to search for unlinked content in target directories. These resources may store sensitive information about web applications and operating systems, such as source code, credentials, internal network addressing, etc...

This attack is also known as Predictable Resource Location, File Enumeration, Directory Enumeration, and Resource Enumeration.


πŸ’Ώ Installation

Download a Release

Releases for multiple architectures can be found in the Releases section. The latest release for each of the following systems can be downloaded and executed as shown below.

Linux (32 and 64-bit) & MacOS

curl -sL https://raw.githubusercontent.com/epi052/feroxbuster/master/install-nix.sh | bash

Windows x86

Invoke-WebRequest https://github.com/epi052/feroxbuster/releases/latest/download/x86-windows-feroxbuster.exe.zip -OutFile feroxbuster.zip
Expand-Archive .\feroxbuster.zip
.\feroxbuster\feroxbuster.exe -V

Windows x86_64

Invoke-WebRequest https://github.com/epi052/feroxbuster/releases/latest/download/x86_64-windows-feroxbuster.exe.zip -OutFile feroxbuster.zip
Expand-Archive .\feroxbuster.zip
.\feroxbuster\feroxbuster.exe -V

Snap Install

Install using snap

sudo snap install feroxbuster

The only gotcha here is that the snap package can only read wordlists from a few specific locations. There are a few possible solutions, of which two are shown below.

If the wordlist is on the same partition as your home directory, it can be hard-linked into ~/snap/feroxbuster/common

ln /path/to/the/wordlist ~/snap/feroxbuster/common
./feroxbuster -u http://localhost -w ~/snap/feroxbuster/common/wordlist

If the wordlist is on a separate partition, hard-linking won't work. You'll need to copy it into the snap directory.

cp /path/to/the/wordlist ~/snap/feroxbuster/common
./feroxbuster -u http://localhost -w ~/snap/feroxbuster/common/wordlist

Homebrew on MacOS and Linux

Install using Homebrew via tap

🍏 MacOS

brew tap tgotwig/feroxbuster
brew install feroxbuster

🐧 Linux

brew tap tgotwig/linux-feroxbuster
brew install feroxbuster

Cargo Install

feroxbuster is published on crates.io, making it easy to install if you already have rust installed on your system.

cargo install feroxbuster

apt Install

Download feroxbuster_amd64.deb from the Releases section. After that, use your favorite package manager to install the .deb.

curl -sLO https://github.com/epi052/feroxbuster/releases/latest/download/feroxbuster_amd64.deb.zip
unzip feroxbuster_amd64.deb.zip
sudo apt install ./feroxbuster_*_amd64.deb

AUR Install

Install feroxbuster-git on Arch Linux with your AUR helper of choice:

yay -S feroxbuster-git

Docker Install

The following steps assume you have docker installed and set up.

First, clone the repository.

git clone https://github.com/epi052/feroxbuster.git
cd feroxbuster

Next, build the image.

sudo docker build -t feroxbuster .

After that, you should be able to use docker run to perform scans with feroxbuster.

Basic usage

sudo docker run --init -it feroxbuster -u http://example.com -x js,html

Piping from stdin and proxying all requests through socks5 proxy

cat targets.txt | sudo docker run --net=host --init -i feroxbuster --stdin -x js,html --proxy socks5://127.0.0.1:9050

Mount a volume to pass in ferox-config.toml

You've got some options available if you want to pass in a config file. ferox-config.toml can live in multiple locations and still be valid, so it's up to you how you'd like to pass it in. Below are a few valid examples:

sudo docker run --init -v $(pwd)/ferox-config.toml:/etc/feroxbuster/ferox-config.toml -it feroxbuster -u http://example.com
sudo docker run --init -v ~/.config/feroxbuster:/root/.config/feroxbuster -it feroxbuster -u http://example.com

Note: If you are on a SELinux enforced system, you will need to pass the :Z attribute also.

docker run --init -v $(pwd)/ferox-config.toml:/etc/feroxbuster/ferox-config.toml:Z -it feroxbuster -u http://example.com

Define an alias for simplicity

alias feroxbuster="sudo docker run --init -v ~/.config/feroxbuster:/root/.config/feroxbuster -i feroxbuster"
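
With the alias defined, the container can be invoked as though feroxbuster were installed locally. A quick illustrative example (the target URL and extensions are placeholders):

feroxbuster -u http://example.com -x php,html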

βš™οΈ Configuration

Default Values

Configuration begins with the following built-in default values baked into the binary:

  • timeout: 7 seconds
  • follow redirects: false
  • wordlist: /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
  • threads: 50
  • verbosity: 0 (no logging enabled)
  • scan_limit: 0 (no limit imposed on concurrent scans)
  • rate_limit: 0 (no limit imposed on requests per second)
  • status_codes: 200 204 301 302 307 308 401 403 405
  • user_agent: feroxbuster/VERSION
  • recursion depth: 4
  • auto-filter wildcards: true
  • output: stdout
  • save_state: true (create a state file in cwd when Ctrl+C is received)

Threads and Connection Limits At A High-Level

This section explains how the -t and -L options work together to determine the overall aggressiveness of a scan. The combination of the two values set by these options determines how hard your target will get hit and to some extent also determines how many resources will be consumed on your local machine.

A Note on Green Threads

feroxbuster uses so-called green threads as opposed to traditional kernel/OS threads. This means (at a high level) that the threads are implemented entirely in userspace, within a single running process. As a result, a scan with 30 green threads will appear to the OS to be a single process with no additional light-weight processes associated with it as far as the kernel is concerned. As such, there will not be any impact to process (nproc) limits when specifying larger values for -t. However, these threads still consume file descriptors, so you will need to ensure that you have a suitably high open file limit (nofile, set via ulimit) when scaling up the number of threads. More detailed documentation on setting appropriate limits can be found in the No File Descriptors Available section of the FAQ.
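
If you'd like to check your current open file limit before scaling up -t, the standard shell builtin below will print it (4096 or higher is a reasonable target, per the FAQ):

ulimit -n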

Threads and Connection Limits: The Implementation

  • Threads: The -t option specifies the maximum number of active threads per directory during a scan
  • Connection Limits: The -L option specifies the maximum number of active connections per thread

Threads and Connection Limits: Examples

To truly have only 30 active requests to a site at any given time, -t 30 -L 1 is necessary. Using -t 30 -L 2 will result in a maximum of 60 total requests being processed at any given time for that site. And so on. For a conversation on this, please see Issue #126 which may provide more (or less) clarity πŸ˜‰
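
As a concrete illustration (the target URL is just a placeholder), the two commands below cap a single site at roughly 30 and 60 in-flight requests, respectively:

./feroxbuster -u http://example.com -t 30 -L 1
./feroxbuster -u http://example.com -t 30 -L 2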

ferox-config.toml

After setting built-in default values, any values defined in a ferox-config.toml config file will override the built-in defaults.

feroxbuster searches for ferox-config.toml in the following locations (in the order shown):

  • /etc/feroxbuster/ (global)
  • CONFIG_DIR/feroxbuster/ (per-user)
  • The same directory as the feroxbuster executable (per-user)
  • The user's current working directory (per-target)

CONFIG_DIR is defined as the following:

  • Linux: $XDG_CONFIG_HOME or $HOME/.config i.e. /home/bob/.config
  • MacOS: $HOME/Library/Application Support i.e. /Users/bob/Library/Application Support
  • Windows: {FOLDERID_RoamingAppData} i.e. C:\Users\Bob\AppData\Roaming

If more than one valid configuration file is found, each one overwrites the values found previously.

If no configuration file is found, nothing happens at this stage.
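
To illustrate the per-user location on Linux (assuming the CONFIG_DIR value listed above), a config file could be put in place like so:

mkdir -p ~/.config/feroxbuster
cp ferox-config.toml ~/.config/feroxbuster/ferox-config.toml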

As an example, let's say that we prefer to use a different wordlist as our default when scanning; we can set the wordlist value in the config file to override the baked-in default.

Notes of interest:

  • it's ok to only specify values you want to change without specifying anything else
  • variable names in ferox-config.toml must match their command-line counterpart
# ferox-config.toml

wordlist = "/wordlists/jhaddix/all.txt"

A pre-made configuration file with examples of all available settings can be found in ferox-config.toml.example.

# ferox-config.toml
# Example configuration for feroxbuster
#
# If you wish to provide persistent settings to feroxbuster, rename this file to ferox-config.toml and make sure
# it resides in the same directory as the feroxbuster binary.
#
# After that, uncomment any line to override the default value provided by the binary itself.
#
# Any setting used here can be overridden by the corresponding command line option/argument
#
# wordlist = "/wordlists/jhaddix/all.txt"
# status_codes = [200, 500]
# filter_status = [301]
# threads = 1
# parallel = 2
# timeout = 5
# auto_tune = true
# auto_bail = true
# proxy = "http://127.0.0.1:8080"
# replay_proxy = "http://127.0.0.1:8081"
# replay_codes = [200, 302]
# verbosity = 1
# scan_limit = 6
# rate_limit = 250
# quiet = true
# silent = true
# json = true
# output = "/targets/ellingson_mineral_company/gibson.txt"
# debug_log = "/var/log/find-the-derp.log"
# user_agent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"
# redirects = true
# insecure = true
# extensions = ["php", "html"]
# no_recursion = true
# add_slash = true
# stdin = true
# dont_filter = true
# extract_links = true
# depth = 1
# filter_size = [5174]
# filter_regex = ["^ignore me$"]
# filter_similar = ["https://somesite.com/soft404"]
# filter_word_count = [993]
# filter_line_count = [35, 36]
# queries = [["name","value"], ["rick", "astley"]]
# save_state = false
# time_limit = "10m"

# headers can be specified on multiple lines or as an inline table
#
# inline example
# headers = {"stuff" = "things"}
#
# multi-line example
#   note: if multi-line is used, all key/value pairs under it belong to the headers table until the next table
#         is found or the end of the file is reached
#
# [headers]
# stuff = "things"
# more = "headers"

Command Line Parsing

Finally, after parsing the available config file, any options/arguments given on the commandline will override any values that were set as a built-in or config-file value.

USAGE:
    feroxbuster [FLAGS] [OPTIONS] --url <URL>...

FLAGS:
    -f, --add-slash        Append / to each request
        --auto-bail        Automatically stop scanning when an excessive amount of errors are encountered
        --auto-tune        Automatically lower scan rate when an excessive amount of errors are encountered
    -D, --dont-filter      Don't auto-filter wildcard responses
    -e, --extract-links    Extract links from response body (html, javascript, etc...); make new requests based on
                           findings (default: false)
    -h, --help             Prints help information
    -k, --insecure         Disables TLS certificate validation
        --json             Emit JSON logs to --output and --debug-log instead of normal text
    -n, --no-recursion     Do not scan recursively
    -q, --quiet            Hide progress bars and banner (good for tmux windows w/ notifications)
    -r, --redirects        Follow redirects
        --silent           Only print URLs + turn off logging (good for piping a list of urls to other commands)
        --stdin            Read url(s) from STDIN
    -V, --version          Prints version information
    -v, --verbosity        Increase verbosity level (use -vv or more for greater effect. [CAUTION] 4 -v's is probably
                           too much)

OPTIONS:
        --debug-log <FILE>                        Output file to write log entries (use w/ --json for JSON entries)
    -d, --depth <RECURSION_DEPTH>
            Maximum recursion depth, a depth of 0 is infinite recursion (default: 4)

    -x, --extensions <FILE_EXTENSION>...          File extension(s) to search for (ex: -x php -x pdf js)
    -N, --filter-lines <LINES>...                 Filter out messages of a particular line count (ex: -N 20 -N 31,30)
    -X, --filter-regex <REGEX>...
            Filter out messages via regular expression matching on the response's body (ex: -X '^ignore me$')

        --filter-similar-to <UNWANTED_PAGE>...
            Filter out pages that are similar to the given page (ex. --filter-similar-to http://site.xyz/soft404)

    -S, --filter-size <SIZE>...                   Filter out messages of a particular size (ex: -S 5120 -S 4927,1970)
    -C, --filter-status <STATUS_CODE>...          Filter out status codes (deny list) (ex: -C 200 -C 401)
    -W, --filter-words <WORDS>...                 Filter out messages of a particular word count (ex: -W 312 -W 91,82)
    -H, --headers <HEADER>...                     Specify HTTP headers (ex: -H Header:val 'stuff: things')
    -o, --output <FILE>                           Output file to write results to (use w/ --json for JSON entries)
        --parallel <PARALLEL_SCANS>
            Run parallel feroxbuster instances (one child process per url passed via stdin)

    -p, --proxy <PROXY>
            Proxy to use for requests (ex: http(s)://host:port, socks5(h)://host:port)

    -Q, --query <QUERY>...                        Specify URL query parameters (ex: -Q token=stuff -Q secret=key)
        --rate-limit <RATE_LIMIT>
            Limit number of requests per second (per directory) (default: 0, i.e. no limit)

    -R, --replay-codes <REPLAY_CODE>...
            Status Codes to send through a Replay Proxy when found (default: --status-codes value)

    -P, --replay-proxy <REPLAY_PROXY>
            Send only unfiltered requests through a Replay Proxy, instead of all requests

        --resume-from <STATE_FILE>
            State file from which to resume a partially complete scan (ex. --resume-from ferox-1606586780.state)

    -L, --scan-limit <SCAN_LIMIT>                 Limit total number of concurrent scans (default: 0, i.e. no limit)
    -s, --status-codes <STATUS_CODE>...
            Status Codes to include (allow list) (default: 200 204 301 302 307 308 401 403 405)

    -t, --threads <THREADS>                       Number of concurrent threads (default: 50)
        --time-limit <TIME_SPEC>                  Limit total run time of all scans (ex: --time-limit 10m)
    -T, --timeout <SECONDS>                       Number of seconds before a request times out (default: 7)
    -u, --url <URL>...                            The target URL(s) (required, unless --stdin used)
    -a, --user-agent <USER_AGENT>                 Sets the User-Agent (default: feroxbuster/VERSION)
    -w, --wordlist <FILE>                         Path to the wordlist

πŸ“Š Scan's Display Explained

feroxbuster attempts to be intuitive and easy to understand, however, if you are wondering about any of the scan's output and what it means, this is the section for you!

Discovered Resource

When feroxbuster finds a response that you haven't filtered out, it's reported above the progress bars and looks similar to what's pictured below.

The number of lines, words, and bytes shown here can be used to filter those responses

response-bar-explained

Overall Scan Progress Bar

The top progress bar, colored yellow, tracks the overall scan status. Its fields are described in the image below.

total-bar-explained

Directory Scan Progress Bar

All other progress bars, colored cyan, represent a scan of one particular directory and will look similar to what's below.

dir-scan-bar-explained

🧰 Example Usage

Multiple Values

Options that take multiple values are very flexible. Consider the following ways of specifying extensions:

./feroxbuster -u http://127.1 -x pdf -x js,html -x php txt json,docx

The command above adds .pdf, .js, .html, .php, .txt, .json, and .docx to each url

All of the methods above (multiple flags, space separated, comma separated, etc...) are valid and interchangeable. The same goes for urls, headers, status codes, queries, and size filters.
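
For instance, all three of the following (illustrative) commands request the same set of status codes:

./feroxbuster -u http://127.1 -s 200 -s 301 -s 302
./feroxbuster -u http://127.1 -s 200,301,302
./feroxbuster -u http://127.1 -s 200 301 302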

Include Headers

./feroxbuster -u http://127.1 -H Accept:application/json "Authorization: Bearer {token}"

IPv6, non-recursive scan with INFO-level logging enabled

./feroxbuster -u http://[::1] --no-recursion -vv

Read urls from STDIN; pipe only resulting urls out to another tool

cat targets | ./feroxbuster --stdin --silent -s 200 301 302 --redirects -x js | fff -s 200 -o js-files

Proxy traffic through Burp

./feroxbuster -u http://127.1 --insecure --proxy http://127.0.0.1:8080

Proxy traffic through a SOCKS proxy (including DNS lookups)

./feroxbuster -u http://127.1 --proxy socks5h://127.0.0.1:9050

Pass auth token via query parameter

./feroxbuster -u http://127.1 --query token=0123456789ABCDEF

Extract Links from Response Body (New in v1.1.0)

Search through the body of valid responses (html, javascript, etc...) for additional endpoints to scan. This turns feroxbuster into a hybrid that looks for both linked and unlinked content.

Example request/response with --extract-links enabled:

  • Make request to http://example.com/index.html
  • Receive, and read in, the body of the response
  • Search the body for absolute and relative links (i.e. homepage/assets/img/icons/handshake.svg)
  • Add the following directories for recursive scanning:
    • http://example.com/homepage
    • http://example.com/homepage/assets
    • http://example.com/homepage/assets/img
    • http://example.com/homepage/assets/img/icons
  • Make a single request to http://example.com/homepage/assets/img/icons/handshake.svg
./feroxbuster -u http://127.1 --extract-links

Here's a comparison of a wordlist-only scan vs --extract-links using Feline from Hack the Box:

Wordlist only

normal-scan-cmp-extract

With --extract-links

extract-scan-cmp-normal

Limit Total Number of Concurrent Scans (new in v1.2.0)

Limit the number of scans permitted to run at any given time. Recursion will still identify new directories, but newly discovered directories can only begin scanning when the total number of active scans drops below the value passed to --scan-limit.

./feroxbuster -u http://127.1 --scan-limit 2

limit-demo

Filter Response by Status Code (new in v1.3.0)

Version 1.3.0 included an overhaul to the filtering system which will allow for a wide array of filters to be added with minimal effort. The first such filter is a Status Code Filter. As responses come back from the scanned server, each one is checked against a list of known filters and either displayed or not according to which filters are set.

./feroxbuster -u http://127.1 --filter-status 301

Pause an Active Scan (new in v1.4.0)

NOTE: v1.12.0 added an interactive menu to the pause/resume functionality. Active scans can still be paused, however, now you're presented with the option to cancel a scan instead of simply seeing a spinner.

Scans can be paused and resumed by pressing the ENTER key (shown below, please see v1.12.0's entry for the latest visual representation)

Replay Responses to a Proxy based on Status Code (new in v1.5.0)

The --replay-proxy and --replay-codes options were added as a way to only send a select few responses to a proxy. This is in stark contrast to --proxy which proxies EVERY request.

Imagine you only care about proxying responses that have either the status code 200 or 302 (or you just don't want to clutter up your Burp history). These two options will allow you to fine-tune what gets proxied and what doesn't.

./feroxbuster -u http://127.1 --replay-proxy http://localhost:8080 --replay-codes 200 302 --insecure

Of note: this means that for every response that matches your replay criteria, you'll end up sending the request that generated that response a second time. Depending on the target and your engagement terms (if any), it may not make sense from a traffic-generation perspective.

replay-proxy-demo

Filter Response by Word Count & Line Count (new in v1.6.0)

In addition to filtering on the size of a response, version 1.6.0 added the ability to filter out responses based on the number of lines and/or words contained within the response body. This addition also changed the information displayed to the user. This section details the new output and how to make use of it with the new filters.

Example output:

200        10l        212w       38437c https://example-site.com/index.html

There are five columns of output above:

  • column 1: status code - can be filtered with -C|--filter-status
  • column 2: number of lines - can be filtered with -N|--filter-lines
  • column 3: number of words - can be filtered with -W|--filter-words
  • column 4: number of bytes (overall size) - can be filtered with -S|--filter-size
  • column 5: url to discovered resource
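
Tying the columns to their filters, any one of the following (illustrative) commands would hide the example response shown above:

./feroxbuster -u https://example-site.com -N 10
./feroxbuster -u https://example-site.com -W 212
./feroxbuster -u https://example-site.com -S 38437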

Filter Response Using a Regular Expression (new in v1.8.0)

Version 1.3.0 included an overhaul to the filtering system which will allow for a wide array of filters to be added with minimal effort. The latest addition is a Regular Expression Filter. As responses come back from the scanned server, the body of the response is checked against the filter's regular expression. If the expression is found in the body, then that response is filtered out.

NOTE: Using regular expressions to filter large responses or many regular expressions may negatively impact performance.

./feroxbuster -u http://127.1 --filter-regex '[aA]ccess [dD]enied.?' --output results.txt --json

Stop and Resume Scans (--resume-from FILE) (new in v1.9.0)

Version 1.9.0 adds a few features that allow for completely stopping a scan, and resuming that same scan from a file on disk.

A simple Ctrl+C during a scan will create a file that contains information about the scan that was cancelled.

save-state

// example snippet of state file

{
  "scans": [
    {
      "id": "057016a14769414aac9a7a62707598cb",
      "url": "https://localhost.com",
      "scan_type": "Directory",
      "complete": true
    },
    {
      "id": "400b2323a16f43468a04ffcbbeba34c6",
      "url": "https://localhost.com/css",
      "scan_type": "Directory",
      "complete": false
    }
  ],
  "config": {
    "wordlist": "/wordlists/seclists/Discovery/Web-Content/common.txt",
    "...": "..."
  },
  "responses": [
    {
      "type": "response",
      "url": "https://localhost.com/Login",
      "path": "/Login",
      "wildcard": false,
      "status": 302,
      "content_length": 0,
      "line_count": 0,
      "word_count": 0,
      "headers": {
        "content-length": "0",
        "server": "nginx/1.16.1"
      }
    }
  ]
},

Based on the example image above, the same scan can be resumed by using feroxbuster --resume-from ferox-http_localhost-1606947491.state. Directories that were already complete are not rescanned, however partially complete scans are started from the beginning.
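
In command form, using the state file name from the example above:

./feroxbuster --resume-from ferox-http_localhost-1606947491.state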

resumed-scan

In order to prevent state file creation when Ctrl+C is pressed, you can simply add the entry below to your ferox-config.toml.

# ferox-config.toml

save_state = false

Enforce a Time Limit on Your Scan (new in v1.10.0)

Version 1.10.0 adds the ability to set a maximum runtime, or time limit, on your scan. The usage is pretty simple: a number followed directly by a single character representing seconds, minutes, hours, or days. feroxbuster refers to this combination as a time_spec.

Examples of possible time_specs:

  • 30s - 30 seconds
  • 20m - 20 minutes
  • 1h - 1 hour
  • 1d - 1 day (why??)

A valid time_spec can be passed to --time-limit in order to force a shutdown after the given time has elapsed.
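
For example, using one of the time_specs listed above (the target URL is a placeholder):

./feroxbuster -u http://127.1 --time-limit 1h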

time-limit

Extract Links from robots.txt (New in v1.10.2)

In addition to extracting links from the response body, using --extract-links makes a request to /robots.txt and examines all Allow and Disallow entries. Directory entries are added to the scan queue, while file entries are requested and then reported if appropriate.

Filter Response by Similarity to A Given Page (fuzzy filter) (new in v1.11.0)

Version 1.11.0 adds the ability to specify an example page for filtering pages that are similar to the given example.

For example, consider a site that attempts to redirect new users to a /register endpoint. The /register page has a CSRF token that alters the page's response slightly with each new request (sometimes affecting overall length). This means that a simple line/word/char filter won't be able to filter all responses. In order to filter those redirects out, one could use a command like this:

./feroxbuster -u https://somesite.xyz --filter-similar-to https://somesite.xyz/register

--filter-similar-to requests the page passed to it via CLI (https://somesite.xyz/register), after which it hashes the response body using the SSDeep algorithm. All subsequent pages are hashed and compared to the original request's hash. If the comparison of the two hashes meets a certain percentage of similarity (currently 95%), then that request will be filtered out.

SSDeep was selected as it does a good job of identifying near-duplicate pages once content-length reaches a certain size, while remaining performant. Other algorithms were tested but resulted in huge performance hits (orders of magnitude slower on requests/second).

NOTE

  • SSDeep/--filter-similar-to does not do well at detecting similarity of very small responses
    • The lack of accuracy with very small responses is considered a fair trade-off for not negatively impacting performance
  • Using a bunch of --filter-similar-to values may negatively impact performance

Cancel a Recursive Scan Interactively (new in v1.12.0)

Version 1.12.0 expanded the pause/resume functionality introduced in v1.4.0 by adding an interactive menu from which currently running recursive scans can be cancelled, without affecting the overall scan. Scans can still be paused indefinitely by pressing ENTER; however, instead of a simple spinner, you're now presented with a menu from which individual scans can be cancelled.

Scans that are started via -u or passed in through --stdin cannot be cancelled; only scans found via --extract-links or recursion are eligible.

Below is an example of the Scan Cancel Menuβ„’.

cancel-menu

Using the menu is pretty simple:

  • Press ENTER to view the menu
  • Choose a scan to cancel by entering its scan index (1)
    • more than one scan can be selected by using a comma-separated list (1,2,3 ... etc)
  • Confirm selections, after which all non-cancelled scans will resume

Here is a short demonstration of cancelling two in-progress scans found via recursion.

cancel-scan

Limit Number of Requests per Second (Rate Limiting) (new in v2.0.0)

Version 2.0.0 added the ability to limit the number of requests per second. One thing to note is that the limit is enforced on a per-directory basis.

Limit number of requests per second, per directory, to 100 (requests per second will increase by 100 for each active directory found during recursion)

./feroxbuster -u http://localhost --rate-limit 100

Limit number of requests per second to 100 to the target as a whole (only one directory at a time will be scanned, thus limiting the number of requests per second overall)

./feroxbuster -u http://localhost --rate-limit 100 --scan-limit 1

rate-limit

Silence all Output or Be Kinda Quiet (new in v2.0.0)

Version 2.0.0 introduces --silent which is almost equivalent to version 1.x.x's --quiet.

--silent

Good for piping a list of urls to other commands:

  • disables logging (no error messages to screen)
  • don't print banner
  • only display urls during scan

example output:

https://localhost.com/contact
https://localhost.com/about
https://localhost.com/terms

--quiet

Good for tmux windows that have notifications enabled as the only updates shown by the scan are new valid responses and new directories found that are suitable for recursion.

  • hide progress bars
  • don't print banner

example output:

302        0l        0w        0c https://localhost.com/Login
200      126l      281w     4091c https://localhost.com/maintenance
200      126l      281w     4092c https://localhost.com/terms
... more individual entries, followed by the directories being scanned ...
Scanning: https://localhost.com
Scanning: https://localhost.com/homepage
Scanning: https://localhost.com/api

Auto-tune or Auto-bail from scans (new in v2.1.0)

Version 2.1.0 introduces the --auto-tune and --auto-bail flags. You can think of these flags as Policies. Both actions (tuning and bailing) are triggered by the same criteria (below). Policies are only enforced after at least 50 requests have been made (or # of threads, if that's > 50).

Policy Enforcement Criteria:

  • number of general errors (timeouts, etc) is higher than half the number of threads (or at least 25 if threads are lower) (per directory scanned)
  • 90% of responses are 403|Forbidden (per directory scanned)
  • 30% of requests are 429|Too Many Requests (per directory scanned)
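
Both behaviors are opt-in; illustrative invocations of each flag are shown below (the target URL is a placeholder):

./feroxbuster -u http://127.1 --auto-tune
./feroxbuster -u http://127.1 --auto-bail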

both demo gifs below use --timeout to overload a single-threaded python web server and elicit timeouts

--auto-tune

The AutoTune policy enforces a rate limit on individual directory scans when one of the criteria above is met. The rate limit self-adjusts every (timeout / 2) seconds. If the number of errors has increased during that time, the allowed rate of requests is lowered. On the other hand, if the number of errors hasn't changed, the allowed rate of requests is increased. If no additional errors are found after a certain number of checks, the rate limit will be removed completely.

auto-tune

--auto-bail

The AutoBail policy aborts individual directory scans when one of the criteria above is met. They just stop getting scanned, no muss, no fuss.

auto-bail

Run Scans in Parallel (new in v2.2.0)

Version 2.2.0 introduces the --parallel option. If you're one of those people who use feroxbuster to scan 100s of hosts at a time, this is the option for you! --parallel spawns a child process per target passed in over stdin (recursive directories are still async within each child).

The number of parallel scans is limited to whatever you pass to --parallel. When one child finishes its scan, the next child will be spawned.

Unfortunately, using --parallel limits terminal output such that only discovered URLs are shown. No amount of -v's will help you here. I imagine this isn't too big of a deal, as folks that need --parallel probably aren't sitting there watching the output... πŸ™ƒ

Example Command:

cat large-target-list | ./feroxbuster --stdin --parallel 10 --extract-links --auto-bail

Resulting Process List (illustrative):

feroxbuster --stdin --parallel 10
 \_ feroxbuster --silent --extract-links --auto-bail -u https://target-one
 \_ feroxbuster --silent --extract-links --auto-bail -u https://target-two
 \_ feroxbuster --silent --extract-links --auto-bail -u https://target-three
 \_ ...
 \_ feroxbuster --silent --extract-links --auto-bail -u https://target-ten

🧐 Comparison w/ Similar Tools

There are quite a few similar tools for forced browsing/content discovery. Burp Suite Pro, Dirb, Dirbuster, etc... However, in my opinion, there are two that set the standard: gobuster and ffuf. Both are mature, feature-rich, and all-around incredible tools to use.

So, why would you ever want to use feroxbuster over ffuf/gobuster? In most cases, you probably won't. ffuf in particular can do the vast majority of things that feroxbuster can, while still offering boatloads more functionality. Here are a few of the use-cases in which feroxbuster may be a better fit:

  • You want a simple tool usage experience
  • You want to be able to run your content discovery as part of some crazy 12 command unix pipeline extravaganza
  • You want to scan through a SOCKS proxy
  • You want auto-filtering of Wildcard responses by default
  • You want an integrated link extractor/robots.txt parser to increase discovered endpoints
  • You want recursion along with some other thing mentioned above (ffuf also does recursion)
  • You want a configuration file option for overriding built-in default values for your scans

feroxbuster gobuster ffuf
fast βœ” βœ” βœ”
allows recursion βœ” βœ”
can specify query parameters βœ” βœ”
SOCKS proxy support βœ” βœ”
multiple target scan (via stdin or multiple -u) βœ” βœ”
configuration file for default value override βœ” βœ”
can accept urls via STDIN as part of a pipeline βœ” βœ”
can accept wordlists via STDIN βœ” βœ”
filter based on response size, wordcount, and linecount βœ” βœ”
auto-filter wildcard responses βœ” βœ”
performs other scans (vhost, dns, etc) βœ” βœ”
time delay / rate limiting βœ” βœ”
extracts links from response body to increase scan coverage (v1.1.0) βœ”
limit number of concurrent recursive scans (v1.2.0) βœ”
filter out responses by status code (v1.3.0) βœ” βœ” βœ”
interactive pause and resume of active scan (v1.4.0) βœ”
replay only matched requests to a proxy (v1.5.0) βœ” βœ”
filter out responses by line & word count (v1.6.0) βœ” βœ”
json output (ffuf supports other formats as well) (v1.7.0) βœ” βœ”
filter out responses by regular expression (v1.8.0) βœ” βœ”
save scan's state to disk (can pick up where it left off) (v1.9.0) βœ”
maximum run time limit (v1.10.0) βœ” βœ”
use robots.txt to increase scan coverage (v1.10.2) βœ”
use example page's response to fuzzily filter similar pages (v1.11.0) βœ”
cancel a recursive scan interactively (v1.12.0) βœ”
limit number of requests per second (v2.0.0) βœ” βœ” βœ”
hide progress bars or be silent (or some variation) (v2.0.0) βœ” βœ” βœ”
automatically tune scans based on errors/403s/429s (v2.1.0) βœ”
automatically stop scans based on errors/403s/429s (v2.1.0) βœ” βœ”
run scans in parallel (1 process per target) (v2.2.0) βœ”
huge number of other options βœ”

Of note, there's another written-in-rust content discovery tool, rustbuster. I came across rustbuster when I was naming my tool ( 😒 ). I don't have any experience using it, but it appears to be able to do POST requests with an HTTP body, has SOCKS support, and has an 8.3 shortname scanner (in addition to vhost dns, directory, etc...). In short, it definitely looks interesting and may be what you're looking for as it has some capability I haven't seen in similar tools.

🀯 Common Problems/Issues (FAQ)

No file descriptors available

Why do I get a bunch of No file descriptors available (os error 24) errors?


There are a few potential causes of this error. The simplest is that your operating system sets an open file limit that is aggressively low. Through personal testing, I've found that 4096 is a reasonable open file limit (this will vary based on your exact setup).

There are quite a few options to solve this particular problem, of which a handful are shown below.

Increase the Number of Open Files

We'll start by increasing the number of open files the OS allows. On my Kali install, the default was 1024, and I know some MacOS installs use 256 πŸ˜• .

Edit /etc/security/limits.conf

One option to up the limit is to edit /etc/security/limits.conf so that it includes the two lines below.

  • * represents all users
  • hard and soft indicate the hard and soft limits for the OS
  • nofile is the number of open files option.

/etc/security/limits.conf
-------------------------
...
*        soft nofile 4096
*        hard nofile 8192
...

Use ulimit directly

A faster option, that is not persistent, is to simply use the ulimit command to change the setting.

ulimit -n 4096

Additional Tweaks (may not be needed)

If you still find yourself hitting the file limit with the above changes, there are a few additional tweaks that may help.

This section was shamelessly stolen from this stackoverflow answer. More information is included in that post and is recommended reading if you end up needing to use this section.

✨ Special thanks to HTB user @sparkla for their help with identifying these additional tweaks ✨

Increase the ephemeral port range, and decrease the tcp_fin_timeout.

The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The fin_timeout defines the minimum time these sockets will stay in TIME_WAIT state (unusable after being used once). Usual system defaults are

  • net.ipv4.ip_local_port_range = 32768 61000
  • net.ipv4.tcp_fin_timeout = 60

This basically means your system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second.

sudo sysctl net.ipv4.ip_local_port_range="15000 61000"
sudo sysctl net.ipv4.tcp_fin_timeout=30

Allow socket reuse while in a TIME_WAIT status

This allows fast cycling of sockets in TIME_WAIT state and re-using them. Make sure to read the post Coping with the TCP TIME-WAIT from Vincent Bernat to understand the implications.

sudo sysctl net.ipv4.tcp_tw_reuse=1 

Progress bars print one line at a time

feroxbuster needs a terminal width of at least the size of what's being printed in order to do progress bar printing correctly. If your width is too small, you may see output like what's shown below.

small-term

If you can, simply make the terminal wider and rerun. If you're unable to make your terminal wider, consider using -q to suppress the progress bars.

What do each of the numbers beside the URL mean?

Please refer to this section where each number's meaning and how to use it to filter responses is discussed.

Connection closed before message completed

The error in question can be boiled down to 'networking stuff'. feroxbuster uses reqwest which uses hyper to make requests to the server. This issue report to the hyper project explains what is happening (quoted below to save you a click). This isn't a bug so much as it's a target-specific tuning issue. When lowering the -t value, the error doesn't occur (or happens much less frequently).

This isn't a bug. Simply slow down the scan. A -t value of 50 was chosen as a sane default that's still quite fast out of the box. However, network related errors may occur when the client and/or server become over-saturated. The Threads and Connection Limits At A High-Level section details how to accomplish per-target tuning.

This is just due to the racy nature of networking.

hyper has a connection pool of idle connections, and it selected one to send your request. Most of the time, hyper will receive the server's FIN and drop the dead connection from its pool. But occasionally, a connection will be selected from the pool and written to at the same time the server is deciding to close the connection. Since hyper already wrote some of the request, it can't really retry it automatically on a new connection, since the server may have acted already.

SSL Error routines:tls_process_server_certificate:certificate verify failed

In the event you see an error similar to

self-signed

error trying to connect: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1913: (self signed certificate)

You just need to add the -k|--insecure flag to your command.

feroxbuster rejects self-signed certs and other "insecure" certificates/site configurations by default. You can choose to scan these services anyway by telling feroxbuster to ignore insecure server certs.
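
For example, rescanning a host that presents a self-signed certificate (the URL is a placeholder):

./feroxbuster -u https://192.168.1.20 -k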

Comments
  • add warning if wordlist item begins with forward slash


    Hello author: when I use feroxbuster against, for example, http://xxx.com/a/api/, my intent is to scan the /api/ directory. Looking at feroxbuster's results, it scans directories under xxx.com/ but not under /a/api/, even though there are results in the /api/ directory (I have seen them with ffuf). Does feroxbuster have a parameter for this?

    enhancement good first issue pinned 
    opened by duokebei 25
  • Implement random user agent flag


    Closes https://github.com/epi052/feroxbuster/issues/352

    Branching checklist

    • [x] There is an issue associated with your PR (bug, feature, etc.. if not, create one)
    • [x] Your PR description references the associated issue (i.e. fixes #123456)
    • [x] Code is in its own branch
    • [x] Branch name is related to the PR contents
    • [x] PR targets main

    Static analysis checks

    • [x] All rust files are formatted using cargo fmt
    • [x] All clippy checks pass when running cargo clippy --all-targets --all-features -- -D warnings -A clippy::mutex-atomic
    • [ ] All existing tests pass

    Documentation

    • [ ] New code is documented using doc comments
    • [x] Documentation about your PR is included in the README, as needed

    Additional Tests

    • [ ] New code is unit tested
    • [ ] New code is integration tested, as needed
    • [ ] New tests pass
    opened by dsaxton 19
  • Add support of multiple methods during scan #440


    Landing a Pull Request (PR)

    Long form explanations of most of the items below can be found in the CONTRIBUTING guide.

    Branching checklist

    • [x] There is an issue associated with your PR (bug, feature, etc.. if not, create one)
    • [x] Your PR description references the associated issue (i.e. fixes #123456)
    • [x] Code is in its own branch
    • [x] Branch name is related to the PR contents
    • [x] PR targets main

    Static analysis checks

    • [x] All rust files are formatted using cargo fmt
    • [x] All clippy checks pass when running cargo clippy --all-targets --all-features -- -D warnings -A clippy::mutex-atomic
    • [x] All existing tests pass

    Documentation

    • [ ] New code is documented using doc comments
    • [ ] Documentation about your PR is included in the README, as needed

    Additional Tests

    • [ ] New code is unit tested
    • [ ] New code is integration tested, as needed
    • [ ] New tests pass
    opened by MD-Levitan 18
  • Publish to Arch User Repository


    Hi, I love Feroxbuster and wanted to help more users access it so as an Arch Linux user, I published it to the AUR (Arch User Repository) under the package name feroxbuster.

    This pull request just appends to the installation instructions of the README.md, telling users how they can install it on Arch Linux.

    Thank you for making an awesome project! If you have an AUR account and would like me to transfer ownership of the package to you, I'd be happy to do that as well.

    opened by spikecodes 17
  • [FEATURE REQUEST] Spider mode for feroxbuster


    Is your feature request related to a problem? Please describe.

    It would be interesting if feroxbuster had a "spider mode," which would really just be the --extract-links behavior without using a wordlist. This would make for a nice option if ever a user wants to get a quick map of a site without also spraying the server with a lot of requests that are likely to fail.

    Describe the solution you'd like

    One approach could be something like feroxbuster -u https://example.com --spider which only requests the root path and then recursively fetches based on links that are found. This would pretty much just be an alias that activates functionality that feroxbuster already has, but in a more expressive and user-friendly way.

    Describe alternatives you've considered

    I've only tried using a very small dummy word list along with --extract-links, but maybe there is a simpler way I haven't thought of.

    enhancement 
    opened by dsaxton 16
  • [FEATURE REQUEST] HTML report generation


    Is your feature request related to a problem? Please describe. Currently the tool supports a limited set of output formats. Generating an HTML report is one solution that makes it very easy to review the generated results.

    Describe the solution you'd like ffuf has a good working example of this. Anything that has some sort of DataTables on top, to be able to do sorting/search is more than enough. An advanced version of this feature would be to combine all the results from parallel/multiple scans in one HTML report file - which would be awesome.

    Describe alternatives you've considered None.

    Additional context Thanks for your work! This tool seems amazing, good work πŸ™

    enhancement good first issue pinned 
    opened by Regala 16
  • 'tokio-runtime-worker' panicked


    Describe the bug Since version 1.11.0, feroxbuster exits with a tokio runtime panic error message

    To Reproduce Steps to reproduce the behavior: feroxbuster --url URL -o feroxbuster.txt (in this case URL is something in 192.168..)

    Traceback / Error Output thread 'tokio-runtime-worker' panicked at 'supplied instant is later than self', library/std/src/time.rs:275:48 stack backtrace: 0: 0x6f771a - 1: 0x4a61fc - 2: 0x6f6d71 - 3: 0x6f67da - 4: 0x7119fa - 5: 0x7119c4 - 6: 0x71197d - 7: 0x4a3900 - 8: 0x4aa742 - 9: 0x646c63 - 10: 0x62e8b5 - 11: 0x62d94e - 12: 0x6e0259 - 13: 0x43d72d - 14: 0x438cbf - 15: 0x421d4b - 16: 0x72ba20 - 17: 0x7295ad - 18: 0x7288ff - 19: 0x72861b - 20: 0x722915 - zsh: abort RUST_BACKTRACE=full ../feroxbuster --url URL -o

    Environment (please complete the following information):

    • feroxbuster version: 1.11.0
    • Linux kali 5.9.0-kali5-amd64 #1 SMP Debian 5.9.15-1kali1 (2020-12-18) x86_64 GNU/Linux

    Additional context The previous version did not panic on me...

    bug wontfix 
    opened by saraiva 16
  • [BUG] Output flooding on some specific case


    Describe the bug

    Hey!

    Running scan against scanfactory.io results in the terminal being flooded with enormous amount of data.

    2021-10-01_15-58 2021-10-01_15-54

    I am not sure what is happening here. It looks like it starts recursive bruteforcing and, in the process, outputs every single request it makes.

    This problem is not limited to the scanfactory website, but it is the first website that both exhibits the described problem and has consented to being scanned.

    To Reproduce

    Steps to reproduce the behavior:

    1. Run feroxbuster -w ~/wordlists/no-extensions.txt -k --url https://scanfactory.io

    Expected behavior

    Expected to not have huge output of every request being sent

    Environment (please complete the following information):

    Arch linux Feroxbuster: v2.3.3 Alacritty (but same in urxvt)

    bug unconfirmed 
    opened by c0rv4x 14
  • [FEATURE REQUEST] Structured log output (JSON lines, ideally)


    Currently, the output format is line-based, but to parse it, you need to split it based on whitespace separators. This isn't a huge problem, but it's not the cleanest way to programmatically analyze large collections of output files for the purpose of eliminating "uninteresting" findings, e.g. results that are similarly sized but not exactly the same: generic responses with some small amount of varying content, like the URL itself, or a unique transaction id or log/error id

    It would be great if log output could be written in line-based NDJSON format. This allows for much quicker, cleaner and mistake-free loading of the files into the language of your choice for doing automated analysis on a large set of output files. In python, for example, you can read the lines, use split() on them, and then manually name the fields, or keep them as a tuple with the column values in them. It would be preferred to simply load a JSON file. This is especially useful for error conditions, which are a little exceptional in their formatting compared with successful outputs

    An alternative of course is just parsing the line-based output using either regex or primitive whitespace splitting- this isn't by any means impossible or even difficult, but it takes some trial and error to handle exceptional lines that may not be common (or you can just ignore these lines)

    Just to clarify, I realize this is not something as useful for looking at a single output file as it is for looking at a large collection of outputs from a parallelized batch during a large penetration test that may have output from 1000 or more sites- this use-case is similar to what I described in #123

    Thoughts?

    Thanks again!

    enhancement has-PR 
    opened by mzpqnxow 14
  • [FEATURE REQUEST] Consider failing after excessive resource oversteps to avoid storage space exhaustion


    I recently ran into an issue where I had my user file descriptor (nofile) soft and hard limit set too low for the way I was using feroxbuster, causing some undesired output. The file descriptor limit is obviously a system configuration issue and not the fault of feroxbuster, and I addressed it. To be clear, I have no reason to think (at this time) that there is any issue with file descriptors leaking- that's not what this issue is about. I simply was too aggressive with feroxbuster given my nofile rlimit. That said, please see the additional comment I added to this issue

    What drew my attention to the resource overstep/failure was generation of an extremely large output file from a single instance of feroxbuster running at the time. I was using a wordlist with 42,000 lines. The output file that was created for a single instance of feroxbuster eventually became 12GB, which exhausted my free diskspace and finally caused feroxbuster to die.

    In my opinion, it would be desirable for feroxbuster to exit upon repeated file descriptor creation failures as opposed to continuously logging the failure to acquire a descriptor

    The reason I'm leaning towards this being treated as a bug or feature request, rather than just making sure the file descriptor limit is set high enough on the system is that despite only having 42,000 words in the wordlist, at the time the process died, the log file for the run was 61,208,507 lines (12GB, as I said)

    This is due partially to how feroxbuster recurses/spiders into discovered directories, though even still that seems a bit high- though it was worth pointing out- the file was too large for me to | sort | uniq -c to see if there were duplicate entries, but it seems irrelevant if the behavior of feroxbuster is changed to be more defensive when it encounters resource limits.

    Given that upon encountering a hard resource limit produced such a large file, might it make more sense to fail early and hard when resource limits are hit? Or maybe as a middle-ground, after some threshold for the count of errors, in the event that other processes on the system are only temporarily holding the bulk of the available file descriptors and may soon release them? Maybe sleep and retry logic would be appropriate?

    Additional context

    • Debian 10 system
    • nofile set to 100,000 for my user (soft and hard)
    • 10 concurrent instances of feroxbuster running at once (which caused this limit to be reached) each using -r -t 30
    • Used cargo to build feroxbuster from source, feroxbuster -V showing 1.5.3

    Thanks, and sorry for the flurry of bug/feature issues, I've taken a lot of interest in the project and hope that some of these can be helpful

    enhancement has-PR 
    opened by mzpqnxow 13
  • [BUG] Skipping words in wordlist


    Describe the bug Words are getting skipped in the wordlist when I use a depth past 1.

    To Reproduce Steps to reproduce the behavior: These scans are being run against the vulnhub machine Prime 1 (https://www.vulnhub.com/entry/prime-1,358/)

    1. Does detect secret.txt: feroxbuster --url http://10.0.0.10 -w ./raft-small-words.txt -x txt -t 200 -o feroxbuster_test -d 1
    2. Does not detect secret.txt: feroxbuster --url http://10.0.0.10 -w ./raft-small-words.txt -x txt -t 200 -o feroxbuster_test -d 2

    If I move the word "secret" to the beginning of the word list then feroxbuster does find it even with using a depth greater than 1.


    Environment (please complete the following information):

    • feroxbuster version: 2.3.3
    • OS: Linux kali 5.10.0-kali9-amd64 #1 SMP Debian 5.10.46-4kali1 (2021-08-09) x86_64 GNU/Linux
    bug unconfirmed 
    opened by Pusher91 12
  • Bump tokio from 1.23.0 to 1.24.1


    Bumps tokio from 1.23.0 to 1.24.1.

    Release notes

    Sourced from tokio's releases.

    Tokio v1.24.1

    This release fixes a compilation failure on targets without AtomicU64 when using rustc older than 1.63. (#5356)

    #5356: tokio-rs/tokio#5356

    Tokio v1.24.0

    The highlight of this release is the reduction of lock contention for all I/O operations (#5300). We have received reports of up to a 20% improvement in CPU utilization and increased throughput for real-world I/O heavy applications.

    Fixed

    • rt: improve native AtomicU64 support detection (#5284)

    Added

    • rt: add configuration option for max number of I/O events polled from the OS per tick (#5186)
    • rt: add an environment variable for configuring the default number of worker threads per runtime instance (#4250)

    Changed

    • sync: reduce MPSC channel stack usage (#5294)
    • io: reduce lock contention in I/O operations (#5300)
    • fs: speed up read_dir() by chunking operations (#5309)
    • rt: use internal ThreadId implementation (#5329)
    • test: don't auto-advance time when a spawn_blocking task is running (#5115)

    #5186: tokio-rs/tokio#5186 #5294: tokio-rs/tokio#5294 #5284: tokio-rs/tokio#5284 #4250: tokio-rs/tokio#4250 #5300: tokio-rs/tokio#5300 #5329: tokio-rs/tokio#5329 #5115: tokio-rs/tokio#5115 #5309: tokio-rs/tokio#5309

    Tokio v1.23.1

    This release forward ports changes from 1.18.4.

    Fixed

    • net: fix Windows named pipe server builder to maintain option when toggling pipe mode (#5336).

    #5336: tokio-rs/tokio#5336

    Commits
    • 31c7e82 chore: prepare Tokio v1.24.1 (#5357)
    • 8d8db27 tokio: add load and compare_exchange_weak to loom StaticAtomicU64 (#5356)
    • dfe252d chore: prepare Tokio v1.24.0 release (#5353)
    • 21b233f test: bump version of async-stream (#5347)
    • 7299304 Merge branch 'tokio-1.23.x' into master
    • 1a997ff chore: prepare Tokio v1.23.1 release
    • a8fe333 Merge branch 'tokio-1.20.x' into tokio-1.23.x
    • ba81945 chore: prepare Tokio 1.20.3 release
    • 763bdc9 ci: run WASI tasks using latest Rust
    • 9f98535 Merge remote-tracking branch 'origin/tokio-1.18.x' into fix-named-pipes-1.20
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • Bump console from 0.15.3 to 0.15.4


    Bumps console from 0.15.3 to 0.15.4.

    Changelog

    Sourced from console's changelog.

    0.15.4

    Enhancements

    • Fix for regression where console size was misreported on windows. (#151)
    Commits

    dependencies 
    opened by dependabot[bot] 0
  • Bump leaky-bucket from 0.12.1 to 0.12.2

    Bump leaky-bucket from 0.12.1 to 0.12.2

    Bumps leaky-bucket from 0.12.1 to 0.12.2.

    Commits

    dependencies 
    opened by dependabot[bot] 0
  • Bump predicates from 2.1.4 to 2.1.5

    Bump predicates from 2.1.4 to 2.1.5

    Bumps predicates from 2.1.4 to 2.1.5.

    Changelog

    Sourced from predicates's changelog.

    [2.1.5] - 2022-12-29

    Fixes

    • Further generalized borrowing of predicates with Borrow trait
    Commits
    • d5a4c33 chore: Release
    • 2b0a450 docs: Update changelog
    • 4e1d03c Merge pull request #134 from rshearman/owned-ord
    • 7934a3a feat: Allow into_iter predicates to own object and eval vs borrowed types
    • f9536e0 feat: Allow eq/ord predicates to own object and eval vs borrowed types
    • ee57a38 chore(ci): Update renovate
    • See full diff in compare view
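
    A hedged sketch of what this generalization enables, assuming predicates 2.1.5 and the Borrow-based impls from #134 (exact trait bounds not reproduced here): a predicate may own its expected value while being evaluated against a borrowed type.

    use predicates::prelude::*;

    fn main() {
        // Owned String on the predicate side, borrowed &str on the evaluation side.
        let p = predicate::eq(String::from("feroxbuster"));
        assert!(p.eval("feroxbuster"));
    }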

    dependencies 
    opened by dependabot[bot] 0
  • [FEATURE REQUEST] Adding simhash to --filter-similar-to

    [FEATURE REQUEST] Adding simhash to --filter-similar-to

    I wondered why --filter-similar-to didn't work well until I read the article and realized that it uses SSDeep, which does a terrible job of handling text with little content.

    SSDeep was selected as it does a good job of identifying near-duplicate pages once content-length reaches a certain size, while remaining performant. Other algorithms were tested but resulted in huge performance hits (orders of magnitude slower on requests/second).

    But this is an important case, especially for API endpoints that return only a small amount of data, so why not also add simhash as an option? I find that it works well on short texts. I'm hoping for such an update.
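
    For illustration only (this is not feroxbuster's implementation), a minimal 64-bit simhash with a Hamming-distance check shows the kind of comparison the request has in mind for short response bodies; token hashing simply reuses std's DefaultHasher:

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // 64-bit simhash over whitespace-separated tokens.
    fn simhash(text: &str) -> u64 {
        let mut weights = [0i64; 64];
        for token in text.split_whitespace() {
            let mut hasher = DefaultHasher::new();
            token.hash(&mut hasher);
            let h = hasher.finish();
            for (bit, w) in weights.iter_mut().enumerate() {
                // +1 for a set bit, -1 for a clear bit.
                if (h >> bit) & 1 == 1 { *w += 1 } else { *w -= 1 }
            }
        }
        // A positive column sum sets the corresponding output bit.
        weights
            .iter()
            .enumerate()
            .fold(0u64, |acc, (bit, w)| if *w > 0 { acc | 1 << bit } else { acc })
    }

    // Two bodies count as "similar" when their simhashes differ in few bit positions.
    fn is_similar(a: &str, b: &str, max_distance: u32) -> bool {
        (simhash(a) ^ simhash(b)).count_ones() <= max_distance
    }

    fn main() {
        let a = "user not found in database id 1001";
        let b = "user not found in database id 1002";
        println!("similar: {}", is_similar(a, b, 3));
    }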

    enhancement pinned 
    opened by Luoooio 1
Releases (v2.7.3)
Verdict-as-a-Service SDKs: Analyze files for malicious content

Verdict-as-a-Service Verdict-as-a-Service (VaaS) is a service that provides a platform for scanning files for malware and other threats. It allows eas

G DATA CyberDefense AG 31 Dec 9, 2022
subscout is a simple, nimble subdomain enumeration tool written in Rust language

subscout is a simple, nimble subdomain enumeration tool written in Rust language. It is designed to help bug bounty hunters, security professionals and penetration testers discover subdomains of a given target domain.

Dom Sec 5 Apr 5, 2023
A simple command line tool which quickly audits the Disallow entries of a site's robots.txt.

Domo Arigato A simple command line tool which quickly audits the Disallow entries of a site's robots.txt. Disallow entries can be used to stop search

Ember Hext 20 Apr 17, 2023
A simple password manager written in Rust

ripasso A simple password manager written in Rust. The root crate ripasso is a library for accessing and decrypting passwords stored in pass format (G

Joakim Lundborg 548 Dec 26, 2022
simple multi-threaded port scanner written in rust

knockson simple multi-threaded port scanner written in rust Install Using AUR https://aur.archlinux.org/packages/knockson-bin/ yay -Syu knockson-bin M

Josh MΓΌnte 4 Oct 5, 2022
Simple prepender virus written in Rust

Linux.Fe2O3 This is a POC ELF prepender written in Rust. I like writing prependers in languages that I'm learning and find interesting. As for the na

Guilherme Thomazi Bonicontro 91 Dec 9, 2022
A simple allocator written in Rust that manages memory in fixed-size chunks.

Simple Chunk Allocator A simple no_std allocator written in Rust that manages memory in fixed-size chunks/blocks. Useful for basic no_std binaries whe

Philipp Schuster 7 Aug 8, 2022
A fast, Rust-based, safe and thread-friendly grammar-based fuzz generator

Intro fzero is a grammar-based fuzzer that generates a Rust application inspired by the paper "Building Fast Fuzzers" by Rahul Gopinath and Andreas Ze

null 203 Nov 9, 2022
Fast, Concurrent, Rust based Tidal-Media-Downloader implementation.

tdl tdl is a rust implementation of the Python Script Tidal-Media-Downloader. Overview tdl offers significant performance improvements over the origin

null 42 Mar 18, 2023
Secure and fast microVMs for serverless computing.

Our mission is to enable secure, multi-tenant, minimal-overhead execution of container and function workloads. Read more about the Firecracker Charter

firecracker-microvm 20.3k Jan 1, 2023
Dangerously fast dns/network/port scanner, all-in-one

Skanuvaty Dangerously fast dns/network/port scanner, all-in-one. Start with a domain, and we'll find everything about it. Features: Finds subdomains f

CCCC 701 Dec 31, 2022
The Heros NFT Marketplace Boilerplate project is designed to let users fork, customize, and deploy their own nft marketplace app to a custom domain, ultra fast.

Heros NFT on Solana The Heros NFT Marketplace Boilerplate project is designed to let users fork, customize, and deploy their own nft marketplace app t

nightfury 6 Jun 6, 2022
A simple port scanner built using rust-lang

A simple port scanner built using rust-lang

Krisna Pranav 1 Nov 6, 2021
Simple verification of Rust programs via functional purification in Lean 2(!)

electrolysis About A tool for formally verifying Rust programs by transpiling them into definitions in the Lean theorem prover. Masters thesis: Simple

Sebastian Ullrich 300 Dec 11, 2022
A simple rust library for working with ZIP archives

rust-zip A simple rust library to read and write Zip archives, which is also my pet project for learning Rust. At the moment you can list the files in

Jorge Gorbe Moya 11 Aug 6, 2022
ctfsak is a tool to speed up common operations needed during CTFs

ctfsak (CTF Swiss Army Knife) This is a tool to help saving time during CTFs, where it's common to have to do a lot of encoding/decoding, encrypting/d

null 1 Dec 1, 2022
Binary coverage tool without binary modification for Windows

Summary Mesos is a tool to gather binary code coverage on all user-land Windows targets without need for source or recompilation. It also provides an

null 381 Dec 22, 2022
LLVM-CBE is a C-backend for LLVM, i.e. a tool that turns LLVM bitcode 1 into C

LLVM-CBE is a C-backend for LLVM, i.e. a tool that turns LLVM bitcode 1 into C. It needs to be built alongside LLVM, which turned out to be such a heavy dependency that shipping it as a Cargo crate would be absurd.

Dmitrii - Demenev 2 May 26, 2022
A simple menu to keep all your most used one-liners and scripts in one place

Dama Desktop Agnostic Menu Aggregate This program aims to be a hackable, easy to use menu that can be paired to lightweight window managers in order t

null 47 Jul 23, 2022