Texting Robots: A Rust native `robots.txt` parser with thorough unit testing

Overview

Crate texting_robots is a library for parsing robots.txt files. A key design goal of this crate is a thorough test suite run against real-world data from millions of sites. While robots.txt is a simple specification in itself, the scale and complexity of the web tease out every possible edge case.

To read more about the robots.txt specification, a good starting point is How Google interprets the robots.txt specification.

This library cannot guard you against all possible edge cases but should give you a strong starting point from which to ensure you and your code constitute a positive addition to the internet at large.

Installation

You can install the library by adding the following to your Cargo.toml dependency list:

[dependencies]
texting_robots = "0.2"

Overview of usage

This crate provides a simple high level usage through the Robot struct.

The Robot struct is responsible for consuming the robots.txt file, processing the contents, and deciding whether a given URL is allowed for your bot or not. Additional information, such as your bot's crawl delay and any sitemaps that may exist, is also available.

Given the many options and potential preferences, Texting Robots does not perform caching or the HTTP GET request for the robots.txt files themselves. This step is left to the user of the library.

use texting_robots::{Robot, get_robots_url};

// If you want to fetch a URL we'll find the URL for `robots.txt`
let url = "https://www.rust-lang.org/learn";
let robots_url = get_robots_url(url);
// Then we fetch `robots.txt` from robots_url to parse as below

// A `robots.txt` file in String or byte format.
let txt = r"User-Agent: FerrisCrawler
Allow: /ocean
Disallow: /rust
Disallow: /forest*.py
Crawl-Delay: 10
User-Agent: *
Disallow: /
Sitemap: https://www.example.com/site.xml";

// Build the Robot for our friendly User-Agent
let r = Robot::new("FerrisCrawler", txt.as_bytes()).unwrap();

// Ferris has a crawl delay of one second per limb
// (Crabs have 10 legs so Ferris must wait 10 seconds!)
assert_eq!(r.delay, Some(10.0));

// Any listed sitemaps are available for any user agent who finds them
assert_eq!(r.sitemaps, vec!["https://www.example.com/site.xml"]);

// We can also check which pages Ferris is allowed to crawl
// Note that we can supply either the full URL or a relative path
assert_eq!(r.allowed("https://www.rust-lang.org/ocean"), true);
assert_eq!(r.allowed("/ocean"), true);
assert_eq!(r.allowed("/ocean/reef.html"), true);
// Sadly Ferris is allowed in the ocean but not in the rust
assert_eq!(r.allowed("/rust"), false);
// Ferris is also friendly but not very good with pythons
assert_eq!(r.allowed("/forest/tree/snake.py"), false);

Crawling considerations

Obtaining robots.txt

Obtaining robots.txt requires an initial HTTP GET request to the domain in question. When handling the HTTP status codes and deciding how they impact robots.txt, the suggestions made by Google are recommended:

  • 2xx (success): Attempt to process the resulting payload
  • 3xx (redirection): Follow a reasonable number of redirects
  • 4xx (client error): Assume there are no crawl restrictions except for:
    • 429 "Too Many Requests": Retry after a reasonable amount of time (potentially set by the "Retry-After" header)
  • 5xx (server errors): Assume you should not crawl until fixed and/or interpret with care

Even when directed to "assume no crawl restrictions" it is likely reasonable and polite to use a small fetch delay between requests.
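
As a concrete illustration of the list above, the sketch below maps a response's status code to a crawl decision. The FetchOutcome enum and interpret_robots_response function are hypothetical and not part of Texting Robots; redirects are assumed to be followed by the HTTP client before this point.

enum FetchOutcome {
    // 2xx: parse the returned payload as robots.txt
    Parse(Vec<u8>),
    // Most 4xx responses: behave as if there are no crawl restrictions
    AssumeUnrestricted,
    // 429 and 5xx: back off and try again later
    RetryLater,
}

fn interpret_robots_response(status: u16, body: Vec<u8>) -> FetchOutcome {
    match status {
        200..=299 => FetchOutcome::Parse(body),
        429 => FetchOutcome::RetryLater,
        400..=499 => FetchOutcome::AssumeUnrestricted,
        500..=599 => FetchOutcome::RetryLater,
        // Anything else (including an unfollowed 3xx) is treated conservatively
        _ => FetchOutcome::RetryLater,
    }
}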

Always set a User Agent

For crawling robots.txt (and especially for crawling in general) you should include a user agent in your request. Most crawling libraries offer a way to set the user agent in a single line.

ClientBuilder::new().user_agent("FerrisCrawler/0.1 (https://ferris.rust/about-this-robot)")...

Beyond respecting robots.txt, a good user agent provides a line of communication between you and the webmaster.
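
A minimal sketch of the full fetch, assuming the reqwest crate with its blocking feature enabled; the user agent string is the illustrative one from above and the helper name is hypothetical:

fn fetch_robots_txt(robots_url: &str) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    // Build a client that sends our user agent with every request
    let client = reqwest::blocking::Client::builder()
        .user_agent("FerrisCrawler/0.1 (https://ferris.rust/about-this-robot)")
        .build()?;
    // `robots_url` is the value returned by `get_robots_url` earlier
    let response = client.get(robots_url).send()?;
    Ok(response.bytes()?.to_vec())
}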

Beyond the robots.txt specification and general suggestions

texting_robots provides much of what you need for safe and respectful crawling but is not a full solution by itself.

As an example, the HTTP error code 429 (Too Many Requests) must be tracked when requesting pages on a given site. When a 429 is seen the crawler should slow down, even if it is already obeying the Crawl-Delay set in robots.txt, potentially using the delay set by the server's Retry-After header.
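
One possible way to combine the two signals is sketched below with hypothetical names; the crawl delay is the delay value from the Robot struct and the Retry-After value is assumed to already be parsed into seconds.

use std::time::Duration;

fn next_fetch_delay(crawl_delay: Option<f32>, retry_after_secs: Option<u64>) -> Duration {
    // Fall back to a one second delay if robots.txt sets no Crawl-Delay
    let base = Duration::from_secs_f32(crawl_delay.unwrap_or(1.0));
    match retry_after_secs {
        // Respect the server's Retry-After if it asks for a longer wait
        Some(secs) => base.max(Duration::from_secs(secs)),
        // No Retry-After on the 429: apply a simple doubling backoff
        None => base * 2,
    }
}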

An even more complex example is that multiple domains may be served by the same backend web server. This is a common scenario for specific products or services that host thousands or millions of domains. How you rate limit fairly using the Crawl-Delay is entirely up to the end user (and potentially the service, when it uses HTTP error code 429 to rate limit traffic).
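
As a starting point, a crawler can at least track politeness per host, as sketched below with hypothetical names; note that this alone does not address the shared-backend case described above.

use std::collections::HashMap;
use std::thread::sleep;
use std::time::{Duration, Instant};

struct Politeness {
    last_fetch: HashMap<String, Instant>,
}

impl Politeness {
    // Block until `delay` has elapsed since the last fetch from `host`
    fn wait_until_polite(&mut self, host: &str, delay: Duration) {
        if let Some(last) = self.last_fetch.get(host) {
            let elapsed = last.elapsed();
            if elapsed < delay {
                sleep(delay - elapsed);
            }
        }
        self.last_fetch.insert(host.to_string(), Instant::now());
    }
}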

To protect against adverse input, users of Texting Robots are also encouraged to follow Google's recommendation and limit input to 500 kibibytes. This is not yet done at the library level, in case larger inputs are desired, but may be revisited depending on community feedback.
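
A cap applied by the caller is enough for now; the helper below is a hypothetical sketch, not part of the library.

// Truncate a fetched payload to the recommended 500 KiB limit
// before handing it to `Robot::new`.
fn cap_robots_txt(body: &[u8]) -> &[u8] {
    const MAX_LEN: usize = 500 * 1024;
    &body[..body.len().min(MAX_LEN)]
}

// e.g. Robot::new("FerrisCrawler", cap_robots_txt(&payload))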

Usage of Texting Robots in other languages

While not yet specifically supporting any languages other than Rust, the library was designed to support language integrations in the future. Battle testing this interpretation of the robots.txt specification against the web is easier when done with friends!

A C API through Rust FFI should be relatively easy to provide given Texting Robots only relies on strings, floats, and booleans. The lack of native fetching abilities should ensure the library is portable across platforms, situations, and languages.
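
As an illustration only, a C-callable wrapper might look something like the sketch below; none of these symbols exist in Texting Robots today, and a real binding would want to reuse a parsed Robot rather than re-parse on every call.

use std::ffi::CStr;
use std::os::raw::c_char;
use texting_robots::Robot;

// Safety: the caller must pass valid, NUL-terminated strings.
#[no_mangle]
pub extern "C" fn robot_allowed(
    agent: *const c_char,
    txt: *const c_char,
    url: *const c_char,
) -> bool {
    let agent = unsafe { CStr::from_ptr(agent) }.to_string_lossy();
    let txt = unsafe { CStr::from_ptr(txt) }.to_string_lossy();
    let url = unsafe { CStr::from_ptr(url) }.to_string_lossy();
    match Robot::new(&agent, txt.as_bytes()) {
        Ok(r) => r.allowed(&url),
        // On a parse failure fall back to assuming no restrictions
        Err(_) => true,
    }
}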

A proof of concept was performed in WASI, the "WebAssembly System Interface", showing that the library compiles happily and only experiences a 50% or 75% speed penalty when used with the Wasmer (LLVM backend) and Wasmtime runtimes respectively. No optimizations have been done thus far and there's likely low hanging fruit to reap.

See wasi_poc.sh for details.

Testing

To run the majority of core tests simply execute cargo test.

Unit and Integration Tests

To check Texting Robots' behaviour against the robots.txt specification, almost all unit tests from Google's C++ robots.txt parser and Moz's reppy have been translated and included.

Certain aspects of the Google and Moz interpretations disagree with each other. When this occurred the author deferred to as much common sense as they were able to muster.

For a number of popular domains the robots.txt of the given domain was saved and tests were written against it.

Common Crawl Test Harness

To ensure that the robots.txt parser will not panic in real world situations, over 34 million robots.txt responses were passed through Texting Robots. While this test doesn't guarantee the robots.txt files were handled correctly, it does ensure the parser is unlikely to panic in practice.

Many problematic, invalid, outrageous, and even adversarial robots.txt examples were discovered in this process.

For full details see the Common Crawl testing harness.

Fuzz Testing

In the local fuzz directory is a fuzz testing harness. The harness is not particularly sophisticated but does gain a low level of structure awareness by using dictionary guided fuzzing. The harness has already revealed one low-level unwrap panic.

To run:

cargo fuzz run fuzz_target_1 -- -max_len=512 -dict=keywords.dict

Note:

  • cargo fuzz requires nightly (i.e. run rustup default nightly in the fuzz directory)
  • If you have multiple processors you may wish to add --jobs N after cargo fuzz run

Code Coverage with Tarpaulin

This project uses Tarpaulin to perform code coverage reporting. Given the relatively small surface area of the parser and Robot struct, the coverage is high. Unit testing is more important for ensuring behavioural correctness, however.

To get line numbers for uncovered code run:

cargo tarpaulin --ignore-tests -v

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Comments
  • Allow exporting the raw rules

    I've been considering using this library for my application as it saves me the effort of writing yet another robots.txt parser, but the only way of interfacing with the rule-set is through the allowed function. My app has its own custom URL matcher, so I would like to be able to export all the raw rules as exposed by robots (that match my user agent), and import them in my pattern matching logic.

    I'm thinking of just exposing a rules() function that returns the contents of the private rules field, possibly as an iterator of tuple (Allow|Disallow, raw rule glob string). Would you be open to a PR?

    opened by 1player 5