Testing Framework for Rust

Overview


Polish

Polish is Test-Driven Development done right


Getting Started

Installing the Package

The crates.io package is kept up to date with all major changes, which means you can use it by simply including the following in your Cargo.toml under your dependencies section:

polish = "*"

Replace * with the latest version number published on crates.io.

But if you'd like to use the nightly (most recent) changes, you can point to the GitHub repository instead:

polish = { git = "https://github.com/alkass/polish", branch = "master" }
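Either way, a complete dependencies section would look something like the following. The pinned version number here is only illustrative; check crates.io for the latest release:

```toml
[dependencies]
# pin a released version from crates.io (illustrative version number)
polish = "0.9"
```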

Writing Test Cases

Single Test Cases

The simplest test case can take the following form:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn my_test_case(logger: &mut Logger) -> TestCaseStatus {
  // TODO: Your test case code goes here
  TestCaseStatus::PASSED // Other valid statuses are FAILED, SKIPPED, and UNKNOWN
}

fn main() {
  let test_case = TestCase::new("Test Case Title", "Test Case Criteria", Box::new(my_test_case));
  TestRunner::new().run_test(test_case);
}

This produces the following:

The example listed above is available here

You can also pass a Rust closure instead of a function pointer, like so:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn main() {
  let test_case = TestCase::new("Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  }));
  TestRunner::new().run_test(test_case);
}

The example listed above is available here

Multiple Test Cases

You can run multiple test cases as follows:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn main() {
  let mut runner = TestRunner::new();
  runner.run_test(TestCase::new("1st Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  })));
  runner.run_test(TestCase::new("2nd Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  })));
  runner.run_test(TestCase::new("3rd Test Case Title", "Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
    // TODO: Your test case code goes here
    TestCaseStatus::PASSED
  })));
}

But a more convenient way is to pass a vector of your test cases to run_tests, like so:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
use polish::logger::Logger;

fn main() {
    let my_tests = vec![
      TestCase::new("1st Test Case Title", "1st Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::PASSED
      })),
      TestCase::new("2nd Test Case Title", "2nd Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::UNKNOWN
      })),
      TestCase::new("3rd Test Case Title", "3rd Test Case Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::FAILED
      }))];
    TestRunner::new().run_tests(my_tests);
}

This produces the following:

The example listed above is available here

Embedded Test Cases

You may choose to have a set of test cases as part of an object in order to test that object itself. For that, a clean way to write your test cases is to implement the Testable trait. The following is an example:

extern crate polish;

use polish::test_case::{TestRunner, TestCaseStatus, TestCase, Testable};
use polish::logger::Logger;

struct MyTestCase;
impl Testable for MyTestCase {
  fn tests(self) -> Vec<TestCase> {
    vec![
      TestCase::new("Some Title #1", "Testing Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
        // TODO: Your test case goes here
        TestCaseStatus::PASSED
      })),
      TestCase::new("Some Title #2", "Testing Criteria", Box::new(|logger: &mut Logger| -> TestCaseStatus {
      // TODO: Your test case goes here
      TestCaseStatus::SKIPPED
    }))]
  }
}

fn main() {
  TestRunner::new().run_tests_from_class(MyTestCase {});
}

This produces the following:

The example listed above is available here

Attributes

Attributes allow you to change how your test cases are run. By default, for instance, your TestRunner instance will run all your test cases regardless of whether any have failed. If you want to change this behaviour, you need to explicitly tell your TestRunner instance to stop the process at the first failure.

THIS FEATURE IS STILL WORK-IN-PROGRESS. THIS DOCUMENT WILL BE UPDATED WITH TECHNICAL DETAILS ONCE THE FEATURE IS COMPLETE.

Logging

The logger object that's passed to each test case offers four logging functions (pass, fail, warn, and info). Each of these functions takes a message argument of type String, which allows you to use the format! macro to format your logs, e.g.:

logger.info(format!("{} + {} = {}", 1, 2, 1 + 2));
logger.pass(format!("{id}: {message}", id = "alkass", message = "this is a message"));
logger.warn(format!("about to fail"));
logger.fail(format!("failed with err_code: {code}", code = -1));

This produces the following:

The example listed above is available here

If your test case returns UNKNOWN and you've printed at least one fail log from within the test case function, the test case will be marked as FAILED. Otherwise, it will be marked as PASSED.
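That resolution rule can be sketched in plain Rust. This is only an illustration of the documented behaviour, not polish's internal implementation; the enum and the resolve function below are defined locally for the example:

```rust
#[derive(Debug, PartialEq)]
#[allow(non_camel_case_types)]
enum TestCaseStatus { PASSED, FAILED, SKIPPED, UNKNOWN }

// Resolve a test case's final status from its returned status and the
// number of `fail` log entries recorded during the run.
fn resolve(returned: TestCaseStatus, fail_logs: usize) -> TestCaseStatus {
    match returned {
        TestCaseStatus::UNKNOWN if fail_logs > 0 => TestCaseStatus::FAILED,
        TestCaseStatus::UNKNOWN => TestCaseStatus::PASSED,
        other => other, // explicit statuses are reported as-is
    }
}

fn main() {
    // UNKNOWN with at least one fail log recorded -> FAILED
    assert_eq!(resolve(TestCaseStatus::UNKNOWN, 1), TestCaseStatus::FAILED);
    // UNKNOWN with no fail logs -> PASSED
    assert_eq!(resolve(TestCaseStatus::UNKNOWN, 0), TestCaseStatus::PASSED);
    // explicit statuses are left untouched
    assert_eq!(resolve(TestCaseStatus::SKIPPED, 3), TestCaseStatus::SKIPPED);
    println!("status resolution behaves as documented");
}
```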

Author

Fadi Hanna Al-Kass

Comments
  • Reducing test output verbosity


    The output when running tests, while nicely formatted, is very verbose. Is there a way to reduce the output to just the name of each test run and its status?

    For example, I have two tests defined as follows:

    use super::*;
    use polish::test_case::{TestRunner, TestCaseStatus, TestCase};
    use polish::logger::Logger;
    
    #[test]
    fn tests() {
        TestRunner::new().run_tests(vec![
            TestCase::new("BrakeAmt::new()",
                          "calling with no input succeeds",
                          Box::new(|_logger: &mut Logger| -> TestCaseStatus {
    
                // GIVEN the method under test
                let expected_result = FrictBrakeAmt(Unorm::default());
                let sut = FrictBrakeAmt::new;
    
                // WHEN a BrakeAmt is created
                let result = sut();
    
                // THEN the request should succeed, containing the expected value
                match result == expected_result {
                    true  => TestCaseStatus::PASSED,
                    false => TestCaseStatus::FAILED,
                }
            })),
    
            TestCase::new("BrakeAmt::from_unorm()",
                          "calling with a unorm value succeeds",
                          Box::new(|_logger: &mut Logger| -> TestCaseStatus {
    
                // GIVEN the method under test
                let test_value = 0.42;
                #[allow(result_unwrap_used)]
                let unorm = Unorm::from_f64(test_value).unwrap();
                let expected_result = FrictBrakeAmt(unorm);
                let sut = FrictBrakeAmt::from_unorm;
    
                // WHEN a BrakeAmt is created
                let result = sut(unorm);
    
                // THEN the request should succeed, containing the expected value
                match result == expected_result {
                  true  => TestCaseStatus::PASSED,
                  false => TestCaseStatus::FAILED,
                }
            })),
        ]);
    }
    

    They yield the following output:

    running 1 test
    Starting BrakeAmt::new() at 14:39:10 on 2017-12-21
    Ended BrakeAmt::new() at 14:39:10 on 2017-12-21
    calling with no input succeeds ... ✅
    0 PASS  0 FAIL  0 WARN  0 INFO
    Starting BrakeAmt::from_unorm() at 14:39:10 on 2017-12-21
    Ended BrakeAmt::from_unorm() at 14:39:10 on 2017-12-21
    calling with a unorm value succeeds ... ✅
    0 PASS  0 FAIL  0 WARN  0 INFO
    
    BrakeAmt::new() (calling with no input succeeds) ... 1ns
    BrakeAmt::from_unorm() (calling with a unorm value succeeds) ... 1ns
    
    Ran 2 test(s) in 2ns
    2 Passed  0 Failed  0 Skipped
    test types::unit_tests::tests ... ok
    
    test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
    

    The verbosity obscures the test results. Ideally, I'd like to see just a simple namespaced list with a global summary at the bottom (across all modules and workspace crates):

    ✅ chal::types::unit_tests::tests::BrakeAmt::new() (calling with no input succeeds) ... 1ns
    ✅ chal::types::unit_tests::tests::BrakeAmt::from_unorm() (calling with a unorm value succeeds) ... 1ns
    
    Ran 2 test(s) in 2ns...  ok
    2 Passed  0 Failed  0 Skipped
    

    Possible?

    enhancement 
    opened by U007D 23
  • Running multiple THEN's per test case


    I have a test suite set up as follows:

    #[test]
    fn tests() {
        TestRunner::new()
            .set_module_path(module_path!())
            .set_attributes(TestRunnerAttributes.disable_final_stats | TestRunnerAttributes.minimize_output)
            .set_time_unit(TestRunnerTimeUnits.microseconds)
            .run_tests(vec![
                TestCase::new("App::run()", "yields arch width", Box::new(|_logger: &mut Logger| -> TestCaseStatus {
                    // GIVEN an app
                    let mock_width = 42;
                    let expected_result = Ok::<String, Error>(format!("Hello, {}-bit world!", mock_width));
                    let mock = MockArch::new(mock_width);
                    let sut = App::new(&mock);
    
                    // WHEN the app is run
                    let result = sut.run();
    
                    // THEN the result should contain the expected architecture width
                    match result == expected_result {
                        true => TestCaseStatus::PASSED,
                        false => TestCaseStatus::FAILED,
                    }
                })),
                TestCase::new("App::run()", "calls Info::width() once", Box::new(|_logger: &mut Logger| -> TestCaseStatus {
                    // GIVEN an app
                    let mock_width = 42;
                    let mock = MockArch::new(mock_width);
                    let sut = App::new(&mock);
    
                    // WHEN the app is run
                    let _ = sut.run();
    
                    // THEN the app should have called Info::width() exactly once
                    match mock.width_times_called.get() == 1 {
                        true => TestCaseStatus::PASSED,
                        false => TestCaseStatus::FAILED,
                    }
                })),
            ]);
    }
    

    Instead of repeating nearly the entire test case, I would prefer to simply add an additional THEN clause to the end of the first test case, like so:

    fn tests() {
        TestRunner::new()
            .set_module_path(module_path!())
            .set_attributes(TestRunnerAttributes.disable_final_stats | TestRunnerAttributes.minimize_output)
            .set_time_unit(TestRunnerTimeUnits.microseconds)
            .run_tests(vec![
                TestCase::new("App::run()", "yields arch width", Box::new(|_logger: &mut Logger| -> TestCaseStatus {
                    // GIVEN an app
                    let mock_width = 42;
                    let expected_result = Ok::<String, Error>(format!("Hello, {}-bit world!", mock_width));
                    let mock = MockArch::new(mock_width);
                    let sut = App::new(&mock);
    
                    // WHEN the app is run
                    let result = sut.run();
    
                    // THEN the result should contain the expected architecture width
                    match result == expected_result {
                        true => TestCaseStatus::PASSED,
                        false => TestCaseStatus::FAILED,
                    }
    
                    // AND_THEN the app should have called Info::width() exactly once
                    match mock.width_times_called.get() == 1 {
                        true => TestCaseStatus::PASSED,
                        false => TestCaseStatus::FAILED,
                    }
                })),
            ]);
    }
    

    Obviously, this won't compile given the current signature. The multi-assertion test case becomes more important as the complexity of the tests increases, since it keeps the amount of duplicated code to a minimum.

    Your version is at 0.9, so I thought I would bring this up before you stabilize your API at 1.0, just in case it ends up being a breaking change.

    Trying not to break the API, here is one idea that might work:

    fn tests() {
        TestRunner::new()
            .set_module_path(module_path!())
            .set_attributes(TestRunnerAttributes.disable_final_stats | TestRunnerAttributes.minimize_output)
            .set_time_unit(TestRunnerTimeUnits.microseconds)
            .run_tests(vec![
                ResultTestCase::new("App::run()", "yields arch width", Box::new(|_logger: &mut Logger| -> Result<(), TestCaseStatus::FAILED>  {
                    // GIVEN an app
                    let mock_width = 42;
                    let expected_result = Ok::<String, Error>(format!("Hello, {}-bit world!", mock_width));
                    let mock = MockArch::new(mock_width);
                    let sut = App::new(&mock);
    
                    // WHEN the app is run
                    let result = sut.run();
    
                    // THEN the result should contain the expected architecture width
                test_case_assert_eq(result, expected_result)?;

                // AND_THEN the app should have called Info::width() exactly once
                test_case_assert_eq(mock.width_times_called.get(), 1)?;
                Ok(())
                })),
            ]);
    }
    

    The benefit is that the test fails at the exact line where the assertion fails. This means a developer can read the error message and know the precise issue without having to start a debug session. Contrast this with:

    ...
                let mut test_case_status = TestCaseStatus::FAILED;
                    // THEN the result should contain the expected architecture width
                    test_case_status = match result == expected_result {
                        true => TestCaseStatus::PASSED,
                        false => TestCaseStatus::FAILED,
                    }
                    // AND_THEN the app should have called Info::width() exactly once
                    test_case_status = match mock.width_times_called.get() == 1 {
                        true => TestCaseStatus::PASSED,
                        false => TestCaseStatus::FAILED,
                    }
                    // AND_THEN ...
                    ...
    
                    // AND_THEN ...
                    ...
    
                    test_case_status
                })),
    

    where a) state must be maintained by the developer, and b) in the event of a failure, the specific sub-test that failed is lost, necessitating c) a debug session.

    Anyway, this is not urgent or anything--this is just a thought I wanted to share with you. Please let me know if you have thoughts on other ways to achieve this.
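For the record, the short-circuiting that the ? operator provides can be demonstrated in plain Rust today. test_case_assert_eq below is a hypothetical helper written for this sketch, not part of polish's API:

```rust
#[derive(Debug, PartialEq)]
#[allow(non_camel_case_types)]
enum TestCaseStatus {
    #[allow(dead_code)]
    PASSED,
    FAILED,
}

// Hypothetical helper: Ok(()) on equality, Err(FAILED) otherwise, so `?`
// returns from the test case at the first failing sub-assertion.
fn test_case_assert_eq<T: PartialEq>(left: T, right: T) -> Result<(), TestCaseStatus> {
    if left == right {
        Ok(())
    } else {
        Err(TestCaseStatus::FAILED)
    }
}

fn run_test_case() -> Result<(), TestCaseStatus> {
    // THEN: this sub-assertion passes, so execution continues
    test_case_assert_eq(1 + 1, 2)?;
    // AND_THEN: this one fails, so `?` returns Err here immediately
    test_case_assert_eq(2 + 2, 5)?;
    Ok(())
}

fn main() {
    // the failing AND_THEN clause is the one reported
    assert_eq!(run_test_case(), Err(TestCaseStatus::FAILED));
    println!("test case short-circuited at the failing sub-assertion");
}
```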

    opened by U007D 3