Python Friday #50: Speed up Pytest With Markers

The bigger our test suite gets, the longer it takes to run all tests. Markers give us a nice way to tell pytest which tests to run and which to skip.

This post is part of my journey to learn Python. You can find the other parts of this series here.

 

What markers come with pytest?

You can get the whole list of markers for pytest in the official documentation or with this command:
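The flag below is pytest's built-in option for listing all registered markers:

```shell
pytest --markers
```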

@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value.

 

Skip a test

A helpful marker is skip, which allows us to ignore a test. We add the skip marker/decorator @pytest.mark.skip() to our test function and pytest will ignore it from now on. It is good practice to add the reason parameter with an explanation of why this test is skipped:
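A minimal sketch of a skipped test (the test name and reason here are invented for illustration):

```python
import pytest

@pytest.mark.skip(reason="no way of currently testing this")
def test_third_party_api():
    # This body is never executed; pytest reports the test as skipped.
    assert False
```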

If we now run our tests, the skipped tests get an s instead of a . in the test report:

===================== test session starts ======================
platform win32 -- Python 3.8.1, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: D:\Python
plugins: cov-2.10.1, html-2.1.1, metadata-1.10.0
collected 2 items

test_own_fixture.py s.

 

Expect a test to fail

Sometimes we want a test to fail. For example, we found a bug that needs a workaround and we want to get an alarm as soon as this workaround is no longer needed. We can mark such tests with the pytest.mark.xfail decorator:
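A sketch of such a test (the "bug" and its details are hypothetical); with strict=True the suite fails as soon as the test unexpectedly passes:

```python
import pytest

@pytest.mark.xfail(reason="workaround for bug #123 still in place", strict=True)
def test_behaviour_behind_workaround():
    # Fails as long as the workaround is needed; once the underlying
    # bug is fixed and this passes, strict=True fails the whole suite.
    assert False
```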

Pytest reports tests that we expect to fail with an x:

===================== test session starts ======================
platform win32 -- Python 3.8.1, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: D:\Python
plugins: cov-2.10.1, html-2.1.1, metadata-1.10.0
collected 3 items

test_own_fixture.py ..x

The strict parameter makes sure that our test suite fails should the test, for whatever reason, start to pass. Otherwise you still get an x in the output and you may miss the fact that the bug is now fixed.

 

Create your own markers

When we want to run only a subset of tests, we can create our own markers. We can add whatever name we want after @pytest.mark. to create our own marker. To create a wip marker, we add @pytest.mark.wip to our test:
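For example (the test contents are invented for illustration):

```python
import pytest

@pytest.mark.wip  # custom marker: "work in progress"
def test_new_feature():
    assert 1 + 1 == 2

def test_existing_feature():
    # No marker: deselected when running pytest -m "wip"
    assert "pytest".upper() == "PYTEST"
```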

We can now run all tests that have this wip marker by calling pytest with the option -m "wip" (or -m "not wip" to run everything except wip):

===================== test session starts ======================
platform win32 -- Python 3.8.1, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: D:\Python
plugins: cov-2.10.1, html-2.1.1, metadata-1.10.0
collected 12 items / 11 deselected / 1 selected

test_own_fixture.py . [100%]

================== warnings summary ==================
test_own_fixture.py:38
D:\Python\test_own_fixture.py:38: PytestUnknownMarkWarning: Unknown pytest.mark.wip - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
@pytest.mark.wip

— Docs: https://docs.pytest.org/en/latest/warnings.html
========= 1 passed, 11 deselected, 1 warning in 0.09s ==========

In my example pytest found 12 tests and deselected 11 of them because they did not have the wip marker. However, as the warning shows, there is a little problem with the wip marker: pytest does not know whether we meant wip or whether we made a typo.

 

Make sure you use the right markers

The flexibility of creating markers on the fly has a big downside when you work in a team: you do not know which markers exist and how they are written. Is it wip or WorkInProgress? A simple way to solve this problem is to register your markers with pytest. For that you create a file called pytest.ini in the root folder of your project. In the markers section you list all your markers:
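A pytest.ini along these lines registers the two markers used in this post (the text after the colon is a free-form description):

```ini
[pytest]
markers =
    wip: Tests for features we currently work on
    slow: Mark slow tests to ignore them for development
```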

If you run pytest now with the -m "wip" option, you will not get any warnings:

===================== test session starts ======================
platform win32 -- Python 3.8.1, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: D:\Python, inifile: pytest.ini
plugins: cov-2.10.1, html-2.1.1, metadata-1.10.0
collected 12 items / 11 deselected / 1 selected

test_own_fixture.py . [100%]

=============== 1 passed, 11 deselected in 0.07s ===============

This is only half of the benefit. If you now try to use a marker that is not registered (like sloooow), pytest will immediately inform you about your mistake and name the file in which you used the unknown marker:

=================== test session starts ====================
platform win32 -- Python 3.8.1, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: D:\Python, inifile: pytest.ini
plugins: cov-2.10.1, html-2.1.1, metadata-1.10.0
collected 9 items / 1 error / 8 selected

====================== ERRORS ==========================
_____________ ERROR collecting test_own_fixture.py _____________
'sloooow' not found in markers configuration option
================= short test summary info ==================
ERROR test_own_fixture.py
!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!
=================== 1 error in 0.18s ===================

As with fixtures before, your markers and their descriptions now show up in the --markers list:

@pytest.mark.wip: Tests for features we currently work on,

@pytest.mark.slow: Mark slow tests to ignore them for development

 

Next

This knowledge of markers allows us to run a specific subset of our tests and get faster feedback. Next week I will explain the parametrize marker, which can help us to write fewer tests.
