Testimony: a generic Python unit testing wrapper.

Testimony: Bearing witness to your code's quality

Welcome to Testimony, the tool that holds a mirror up to your code and gives it a platform to testify to its own correctness. We all write code we're not proud of, tucked away in private repositories, hoping it will never see the light of day. But with Testimony, you can bring that code out of the shadows and into the light of public scrutiny.

Testimony provides a simple and intuitive interface for testing Python functions and verifying their outputs against a set of input-output pairs. Simply define your test cases in a JSON file, and Testimony will automatically run your code against each one and report any failures.

With Testimony, you can be confident that your code is functioning as expected and producing the correct outputs for a variety of inputs. No more second-guessing or relying on manual testing – Testimony has got you covered.

So what are you waiting for? Let Testimony be your guide to writing better, more reliable code, and let your code testify to its own correctness.

Getting Started

To run the test cases, you will need to create a virtual environment and install the dependencies from the requirements.txt file:

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Running the Test Cases

To run the test cases, use the following command:

python testimony.py -m module -s subroutine

where module is the name of the Python file to pick functions from (given without the .py extension), and subroutine is the name of the function in that file to test.
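For example (the module and function names here are hypothetical), suppose a file named math_utils.py in the same directory defines the function you want to test:

# math_utils.py -- hypothetical module under test
def elementwise_sum(a, b):
    """Return the element-wise sum of two equal-length lists."""
    return [x + y for x, y in zip(a, b)]

You would then run:

python testimony.py -m math_utils -s elementwise_sum

Testimony appends .py when loading the module and looks for a matching testcases_elementwise_sum.json file (see the next section).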

Creating Test Cases

To create new test cases, create a new JSON file named testcases_[subroutine].json in the same directory as testimony.py and add test cases in the following format:

[
    {
        "id": "01",
        "input": [[0], [0]],
        "expected_output": [0]
    }
]

where id is a unique identifier for the test case, input is the list of positional arguments to pass to the function (Testimony unpacks it when calling the function), and expected_output is the value the function is expected to return.
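Because input is unpacked as positional arguments, the sample case above corresponds to the call subroutine([0], [0]). A minimal sketch of the mapping, reusing the hypothetical elementwise_sum function from the previous section:

# Simplified illustration of how Testimony turns a test case into a call.
def elementwise_sum(a, b):
    return [x + y for x, y in zip(a, b)]

case = {"id": "02", "input": [[1, 2], [3, 4]], "expected_output": [4, 6]}
result = elementwise_sum(*case["input"])  # equivalent to elementwise_sum([1, 2], [3, 4])
assert result == case["expected_output"]

A function that takes a single list argument would instead use "input": [[1, 2, 3]], i.e. a one-element list wrapping that single argument.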

requirements.txt

autopep8==2.0.2
colorama==0.4.6
pycodestyle==2.10.0
termcolor==2.3.0
tomli==2.0.1
testcases_[subroutine].json (example)

[
    {
        "id": "01",
        "input": [[0], [0]],
        "expected_output": [0]
    }
]
testimony.py

#!/usr/bin/env python3
import argparse
import importlib.util
import json
import sys
import unittest

import colorama

colorama.init()
GREEN = colorama.Fore.GREEN
RED = colorama.Fore.RED
YELLOW = colorama.Fore.YELLOW
RESET = colorama.Fore.RESET


class TestCases(unittest.TestCase):
    """Empty container; test methods are attached to it dynamically below."""
    pass


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-m", "--module", required=True,
                        help="Name of the question file to pick functions from")
    parser.add_argument("-s", "--subroutine", required=True,
                        help="Name of the function to test")
    args = parser.parse_args()

    # Load <module>.py and grab the function under test by name.
    spec = importlib.util.spec_from_file_location(
        args.subroutine, f"{args.module}.py")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    function = getattr(module, args.subroutine)

    with open(f"testcases_{args.subroutine}.json") as f:
        test_data = json.load(f)

    # Generate one test method per test case and attach it to TestCases.
    for data in test_data:
        def test_method(self, data=data):
            result = function(*data['input'])
            expected = data['expected_output']
            try:
                self.assertEqual(result, expected)
                print(f"\n{GREEN}PASS{RESET} {args.subroutine}_{data['id']}")
            except AssertionError as e:
                print(
                    f"\n{RED}FAIL{RESET} {args.subroutine}_{data['id']} - {str(e)}")
                print(f"Expected: {expected}")
                print(f"Actual: {result}")
                print(f"{RED}{'-'*80}{RESET}")
                # Re-raise so unittest records the failure and the exit code reflects it.
                raise

        test_name = f"test_{args.subroutine}_{data['id']}"
        setattr(TestCases, test_name, test_method)

    # unittest.makeSuite() is deprecated; build the suite with TestLoader instead.
    suite = unittest.TestLoader().loadTestsFromTestCase(TestCases)
    result = unittest.TextTestRunner().run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)