Welcome to Testimony, the tool that holds a mirror up to our code and provides a platform for it to testify to its own correctness. We all write code we're not proud of, tucked away in private repositories, hoping it will never see the light of day. But with Testimony, you can bring that code out of the shadows and into the light of public scrutiny.
Testimony provides a simple and intuitive interface for testing Python functions and verifying their outputs against a set of input-output pairs. Simply define your test cases in a JSON file, and Testimony will automatically run your code against each one and report any failures.
With Testimony, you can be confident that your code is functioning as expected and producing the correct outputs for a variety of inputs. No more second-guessing or relying on manual testing – Testimony has got you covered.
So what are you waiting for? Let Testimony be your guide to writing better, more reliable code, and let your code testify to its own correctness.
To run the test cases, you will need to create a virtual environment and install the dependencies from the requirements.txt file:

```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
To run the test cases, use the following command:

```shell
python testimony.py -m module -s subroutine
```

where `module` is the name of the module file to pick functions from, and `subroutine` is the name of the function to test.
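As a concrete example, suppose a file `vectors.py` defines the function you want to test. Both the file name and the function below are hypothetical, chosen only to illustrate the workflow; they are not part of Testimony:

```python
# vectors.py -- a hypothetical module to test (names are illustrative only)

def elementwise_sum(a, b):
    """Return the element-wise sum of two equal-length lists."""
    return [x + y for x, y in zip(a, b)]
```

You would then presumably run `python testimony.py -m vectors -s elementwise_sum`, with the matching test cases stored in `testcases_elementwise_sum.json`.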
To create new test cases, create a new JSON file named `testcases_[subroutine].json` and add test cases in the following format:

```json
[
    {
        "id": "01",
        "input": [[0], [0]],
        "expected_output": [0]
    }
]
```

where `id` is a unique identifier for the test case, `input` is a list of arguments to pass to the function, and `expected_output` is the expected output of the function.
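To make the format concrete, here is a minimal sketch of the kind of check Testimony performs: load the JSON, unpack each `input` list as positional arguments, and compare the result to `expected_output`. The function name and the inline test data are assumptions for illustration, not Testimony's actual internals:

```python
import json

def elementwise_sum(a, b):
    """Hypothetical function under test (illustrative only)."""
    return [x + y for x, y in zip(a, b)]

def run_cases(func, cases):
    """Run each test case and return the ids of any failing cases."""
    failures = []
    for case in cases:
        result = func(*case["input"])  # unpack the argument list
        if result != case["expected_output"]:
            failures.append(case["id"])
    return failures

# Inline equivalent of a testcases_[subroutine].json file
cases = json.loads('[{"id": "01", "input": [[0], [0]], "expected_output": [0]}]')
print(run_cases(elementwise_sum, cases))  # an empty list means every case passed
```

Note how `"input": [[0], [0]]` supplies two arguments, `[0]` and `[0]`, because `input` is a list of arguments rather than a single value.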