I use the unittest framework to run Python tests. Some system tests consist of a series of steps for a given scenario, and they need to run in a specific order.
For example, I have a client application which performs off-site backups. The system tests run against a live API to catch high-level regressions in the client code that make it incompatible with the API. The steps look like this:
- Perform the original off-site backup of a directory.
- Add a file to the directory. Perform a new backup, but simulate a situation where the API crashed in the middle.
- Launch backup again. Ensure the upload of the file from the previous step is resumed.
- Move a file locally. Perform a backup. Check that everything works as expected.
- Delete a file locally. Perform a backup, and while doing so, crash intentionally to simulate a situation where the file was deleted remotely, but the local client doesn’t know that.
- Perform a backup again. The client will ask the API to delete a file that was already deleted, receive an HTTP 404, and is expected to handle it correctly.
- Recover a specific file.
- Recover the whole directory.
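The steps above map onto a test class roughly like this (a sketch only; the class and method names are illustrative, and the bodies are stubs):

```python
import unittest

class TestOffsiteBackup(unittest.TestCase):
    """One scenario, one step per test method. The numeric prefixes
    make unittest's default name-based ordering run them in sequence."""

    def test_01_initial_backup(self):
        pass  # back up the directory for the first time

    def test_02_interrupted_backup(self):
        pass  # add a file, start a backup, simulate an API crash mid-upload

    def test_03_resume_upload(self):
        pass  # re-run the backup; the interrupted upload should resume

    def test_04_moved_file(self):
        pass  # move a file locally, back up, check everything still works

    def test_05_delete_with_crash(self):
        pass  # delete a file, crash during backup: deleted remotely only

    def test_06_handle_404(self):
        pass  # next backup asks to delete the missing file, gets HTTP 404

    def test_07_recover_file(self):
        pass  # restore a single file

    def test_08_recover_everything(self):
        pass  # restore the whole directory
```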
Here, the order is crucial: I can’t test the recovery step, if I haven’t made the backup in the first place. And I can’t test the third step before performing the second one, because I cannot resume an upload which never started in the first place.
In order to determine the order, I prefix the names of the tests with a number:
```python
def test_07_recover_file(self): ...

def test_08_recover_everything(self): ...
```
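This works because `unittest.TestLoader` sorts test method names (via `sortTestMethodsUsing`) before building the suite, so the numeric prefixes, not the definition order, decide the sequence. A small demonstration, with the methods deliberately defined out of order:

```python
import unittest

class Demo(unittest.TestCase):
    # Defined in the "wrong" order on purpose: the loader sorts by name,
    # so the numeric prefixes decide which test runs first.
    def test_02_second(self):
        pass

    def test_01_first(self):
        pass

names = unittest.TestLoader().getTestCaseNames(Demo)
print(names)  # ['test_01_first', 'test_02_second']
```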
This creates a problem when I need to insert a test in the middle of the chain: instead of just adding the test, I also have to rename every test that comes after it.
I thought about giving each test a decorator that lists the tests which should run before it, a bit like dependency management, where every package declares the other packages it needs installed. I'm afraid, however, that such an approach would be difficult to understand (and would require a custom test runner).
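For what it's worth, the decorator idea can be sketched with surprisingly little machinery. This is not something unittest supports out of the box; `depends_on`, `DependencyLoader`, and the depth-first ordering below are all illustrative names and assumptions of mine (and the sketch does no cycle detection):

```python
import unittest

def depends_on(*deps):
    """Mark a test method as depending on other tests (by method name)."""
    def mark(fn):
        fn._deps = deps
        return fn
    return mark

class DependencyLoader(unittest.TestLoader):
    """Order test methods by a depth-first walk of the declared dependencies,
    so every test runs after the tests it depends on."""
    def getTestCaseNames(self, testCaseClass):
        names = super().getTestCaseNames(testCaseClass)
        deps = {n: getattr(getattr(testCaseClass, n), '_deps', ())
                for n in names}
        ordered, seen = [], set()
        def visit(name):
            if name in seen:
                return
            seen.add(name)
            for dep in deps.get(name, ()):
                visit(dep)  # dependencies first
            ordered.append(name)
        for name in names:
            visit(name)
        return ordered

class Chain(unittest.TestCase):
    # Alphabetically, test_a_recover would run first; the declared
    # dependency forces test_b_backup ahead of it.
    @depends_on('test_b_backup')
    def test_a_recover(self):
        pass

    def test_b_backup(self):
        pass

print(DependencyLoader().getTestCaseNames(Chain))
# ['test_b_backup', 'test_a_recover']
```

Passing such a loader to `unittest.main(testLoader=DependencyLoader())` would make the whole suite honor the declared order, but I agree it adds a layer readers have to learn.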
Is there a more elegant approach to that?