By Coen van der Kamp and Kees Hink

Testing Django Applications (1)

Everyone agrees that automated tests are a good thing, but getting started can be a puzzle. In these blog posts we give practical advice, full of code examples, on how to write high quality tests for Django. This is part 1.

Introduction

Automated tests ensure that you don't accidentally break something. As a project gets bigger, tests are indispensable to ensure that everything keeps working.

But starting to write tests is not always easy. Simply reading a tutorial and searching the web might leave you with a lot of questions, or lead you down a path you might regret later.

In this blog post we summarize how we write tests for Django in 2024. Note that this is an opinionated take. But at least, you'll get just one opinion here, instead of the many you'll come across when randomly searching the internet. We'll try to explain why we do it like we do.

This blog started as a talk we gave at PyGrunn 2024. We felt the topic deserves a proper writing down.

Our perspective

As a team of developers, we don't want to rely on information in someone's head to know if something will break. So for years, we've been testing all aspects of our applications, from template rendering to FTP connections.

Also, the members of our teams change often. So when one of us works on something and a test fails, that test was likely written by someone else. It's important not only to signal that something broke, but also to help your colleague (or future self) understand why.

This is why we have years of experience in writing meaningful, readable tests.

In the beginner's mind there are many possibilities, but in the expert's there are few.
- Shunryu Suzuki

Outline

There is too much to cover for a single blog post. In this one, we have these chapters:

  1. Python's unittest and TestCase
  2. Django's TestCase
  3. Testing a view with Django's test client
  4. The pytest runner and some of its plugins
  5. Use a separate settings file for testing

In the next blog post after this one, we'll also cover these topics:

  • Setting up testing content
  • Testing form submissions
  • Isolating code for testing
  • Testing APIs
  • Email, parametrization, counting queries, changing current time
  • Marking tests
  • Writing readable tests
  • Rules of thumb

And after that, who knows?

About our examples

To keep our code examples short and to the point, we:

  • omitted docstrings
  • didn't use type hints
  • decreased line width

These are not recommendations.

Python unittest

We'll start from the very beginning: the unittest library, which comes standard with Python.

This has nothing to do with Django yet. We introduce it because Django's test framework uses this framework, so a basic understanding is very useful.

Side note: "Unit Test" vs. "Integration Test"

Because naming things is important, we expand a little on the library's name unittest.

There are different types of tests. Some common names are:

  • Unit tests, where you test just one method or function
  • Integration tests, where you test how some parts of code work together
  • User Interaction tests, where you test from a user perspective

A nice article describing many types of testing is on the bitecode blog.

"unittest" is the name of the test framework, but "Unit Test" is the category name for tests that test a single unit of code. You can write integration tests with the unittest framework, so maybe the name isn't entirely correct.

As unittest is a very old and respectable part of the standard library, let's forgive it for this, and move on.

Simple function and test

Here we have a simple Python function to add two values.

It takes two arguments and returns the sum of them.

def add(a, b):
    return a + b

With unittest we can test the add function.

A test class allows you to group tests that belong to each other.

# test_add.py

import unittest

from add import add  # assuming the add function lives in add.py

class TestAdd(unittest.TestCase):

    def test_add(self):
        self.assertEqual(add(4, 2), 6)

We can run the test as follows:

$ python -m unittest
.
-------------------------------------------------------------
Ran 1 test in 0.000s

OK

-m is a flag to run a module (here: unittest) as a script.

All files that start with test_ are considered test modules. These tests will be executed. That's why we called our file test_add.py.

A successful test is presented as a dot. We see a summary of the test run: "Ran 1 test" and "OK".

When the test process terminates and all tests pass, it exits with code 0.
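We can observe this exit code directly. Here's a small sketch (not part of the original tutorial) that writes a minimal passing test module to a temporary directory and runs unittest discovery on it in a child process:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

TEST_CODE = """
import unittest

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(4 + 2, 6)
"""

# Write a minimal passing test module, then run discovery in that directory.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "test_add.py").write_text(TEST_CODE)
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover"],
        cwd=tmp,
        capture_output=True,
        text=True,
    )

print(result.returncode)  # 0: all tests passed
```

Change the assertion in TEST_CODE to something false and the return code becomes non-zero.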

Failure

Now let's say we changed the add function to this:

def add(a, b):
    return 10 * a + b

Then, on running the test, we get an F, with a traceback:

$ python -m unittest
F
=============================================================
FAIL: test_add (test_add.TestAdd)
-------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/coen/test_add.py", line 10, in test_add
    self.assertEqual(add(4, 2), 6)
AssertionError: 42 != 6

--------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)

The process terminates with a non-zero exit code. This means that if other commands were scheduled to run after it, they will be cancelled.

When running tests in a CI/CD pipeline, a non-zero exit code will cause the pipeline to fail. This usually means that your code cannot be merged on GitLab, GitHub or Bitbucket. We will discuss pipelines in more detail later.

The setUp and tearDown methods

Tests have setUp and tearDown methods that are run before and after each test. You can use these to create test data that is needed for each test within the class.

For example, you can create a temporary directory in setUp, write files to it in tests, and remove the directory in tearDown.

import shutil
import tempfile
import unittest

class TestAdd(unittest.TestCase):

    def setUp(self):
        # Runs before each test: create a scratch directory
        self.tmpdir = tempfile.mkdtemp()

    def tearDown(self):
        # Runs after each test: remove it again
        shutil.rmtree(self.tmpdir)

Unit testing with Django

This chapter is a summary of Django's excellent tutorial page.

To write a similar, Django-specific test, we use Django's TestCase. It extends unittest.TestCase, so Django uses the unittest framework under the hood.

Django's TestCase sets up a test database and wraps each test in a transaction that is rolled back after the test. So even if you create and save an object, it will not persist after the test finishes.

Here we test that when we create a new user, the user is active:

from django.contrib.auth.models import User
from django.test import TestCase

class UserTests(TestCase):

    def test_user_is_active(self):
        user = User.objects.create_user(username="test")
        self.assertTrue(user.is_active)

Note that instead of running python -m unittest, you'd call:

python manage.py test

TestCase is in fact a subclass of SimpleTestCase, which provides some Django-specific extras but no database access.

I guess that makes TestCase less simple than SimpleTestCase. Badumtss!

Testing a view

Now let's test a view. This is more of an "Integration Test" than a "Unit Test", because rendering a view uses several of Django's parts: routing, finding instances of a model, rendering a template, to name but a few.

For this, we use Django's test client. Django's TestCase has an attribute client. You can think of this test client as a very simple browser-like thing.

Let's assume we have a polls app, whose index view lists all polls in the app. Here we test that when there is nothing to show, a text saying so is displayed:

from django.test import TestCase
from django.urls import reverse

class PollsIndexViewTests(TestCase):
    def test_no_polls(self):
        response = self.client.get(reverse("polls:index"))
        self.assertEqual(response.status_code, 200)
        self.assertQuerySetEqual(
            response.context["polls"], []
        )
        self.assertContains(
            response, "No polls are available."
        )

As you see, we test two other things as well:

  1. We check that the page renders without error ( status_code == 200).
  2. We check that the template received a context containing polls (probably a QuerySet), which is empty. So the client is smarter than a browser in this respect, because a browser wouldn't know anything about context.

Testing as a specific user

Often different users should see different things. So Django made it easy to test a view as a certain user, with the force_login() method.

Let's say we have a view called secret which should be accessible only to superusers. We can test that as follows:

class AdminViewTests(TestCase):
    def test_admin_has_access(self):
        admin = User.objects.create_user(
            username="admin", is_superuser=True
        )
        self.client.force_login(admin)
        response = self.client.get(reverse("polls:secret"))
        self.assertEqual(response.status_code, 200)

So far, we're still in the realm of Django's testing tools, without any additional libraries. Wrapping up this section, we'd like to mention the Django tutorial part 5 and Django's topic page about testing tools as a starting point. Chances are, you will find answers to a lot of your questions if you thoroughly read these first.

Now, we continue where the tutorial leaves off. We'll show you ways to write tests that are easier to read and faster to set up. And we'll show you how to test just about anything.

The pytest runner

This is where this blog post gets opinionated. The testing section of the Django docs does not mention pytest. (Fun fact: it's mentioned only once, in the howto/upgrade-version section.)

We like pytest better, because it's easier to read. Compare this (Django TestCase):

class TestAdd(unittest.TestCase):

    def test_add(self):
        self.assertEqual(add(4, 2), 6)

to this (pytest):

def test_add():
    assert add(4, 2) == 6

As you see, we don't need a class. The function-based syntax makes it easier to read.

Also, the assert statement is much clearer than self.assertEqual.

If you're wondering how to set up testing content without a setUp method: We'll get to that soon.

It's worth noting that class-based tests are also possible with pytest, and you can in fact run all your unittest.TestCase and django.test.TestCase tests with it.

There might be situations where setting up some very complicated testing content for a specific test class might legitimize the use of a Django TestCase, but we haven't found any yet.

pytest is a mature and healthy project, and we recommend using it without reservation.

Pytest fixtures

A pytest fixture is something that can be made available to a test function as an argument.

This should not be confused with Django's fixtures, which are a way to store a fixed set of testing content, for example as JSON files. We would argue that you should forget about Django fixtures as soon as possible; we'll explain why later. But we have to mention them at this point to avoid confusion.

As an example of a pytest fixture, let's look at caplog: Here we test that when we call the add function, a specific string is logged:

import logging 

def test_add_logging(caplog):
    with caplog.at_level(logging.INFO):
        add(4, 2)
    assert "Adding 4 and 2" in caplog.text

This example is just to show a pytest fixture in action. But zooming in on caplog specifically: you can also do this with the unittest framework using assertLogs.
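Note that for the caplog test above to pass, the add function itself would need to emit that log line. A minimal sketch, using Python's standard logging module:

```python
import logging

logger = logging.getLogger(__name__)


def add(a, b):
    # Log at INFO level so the caplog test can capture the message
    logger.info("Adding %s and %s", a, b)
    return a + b
```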

Pytest marks

Pytest allows you to mark individual tests as having a certain property.

For example, pytest.mark.xfail(reason="Test for old backend, TODO remove when transition complete") marks a test as expected to fail, so the test run as a whole won't fail when that test does.

For a complete list of pytest marks, see the docs.
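To make this concrete, here are two marks on plain test functions (the reasons are invented for illustration):

```python
import pytest


@pytest.mark.skip(reason="demonstration only")
def test_skipped():
    assert False  # never executed: the test is skipped


@pytest.mark.xfail(reason="known bug in the old backend")
def test_known_failure():
    assert 1 + 1 == 3  # fails, but the run as a whole stays green
```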

Pytest plugins

Pytest allows for plugins. These plugins can provide various extra features (through fixtures), so you'll see a lot of them in these blog posts. The first one is the most important one:

pytest-django

pytest-django gives us Django-specific fixtures.

In the example below, the admin_user fixture gives us a User object that is a superuser.

def test_admin_user_is_superuser(admin_user):
    assert admin_user.is_superuser is True

There are other fixtures to help with various aspects, we'll discuss some of them later.

Custom pytest fixtures

We can define our own fixtures for pytest to use. Let's say we have an app that centers around an Order model, and we want to test that the homepage shows how many orders you have:

# tests/test_views.py

from django.urls import reverse

def test_homepage_order_count(client, user_with_orders):
    client.force_login(user_with_orders)
    response = client.get(reverse("index"))
    assert "You have 1 order" in response.content.decode()

The client fixture here is Django's test client.

The user_with_orders fixture comes from a file called conftest.py that we created next to our test file. Fixtures in that file are automatically discovered.

# tests/conftest.py

import pytest

@pytest.fixture
def user_with_orders():
    user = User.objects.create(username="joe")
    Order.objects.create(user=user)
    return user

In reality, we would not use .objects.create(), but we'll get to that in part 2 when we discuss factories.

Pytest options

Some helpful pytest options:

  • pytest --pdb : Drops you into debugger shell on failure, so you can inspect what your variables contain.
  • pytest -x : Stop on the first failure, so you can fix that before running all other tests.
  • pytest --lf : Re-run tests that failed on last run, because it's quicker than running all tests.
  • pytest -s : Disables output capturing; required to make breakpoints (import pdb; pdb.set_trace()) work in a test.

Other runners

There used to be another test runner called nose. It's dead now.

We recommend you use either pytest (preferred) or the Django test runner.

Use a separate settings file for testing

Probably your project already has a settings/development.py and settings/production.py.

We recommend adding settings/testing.py, where you do the following:

  • Set dummy backends for things like caching
  • Set dummy URLs for external services, so they're never called from tests
  • Disable everything you don't absolutely need, to make tests run as fast as possible
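As an illustration, such a file could look like this (assuming your shared settings live in a base module next to it; the backends below are Django's built-in dummy and local-memory backends):

```python
# settings/testing.py
from .base import *  # noqa: F401,F403 -- assumes a settings/base.py

# Store outgoing email on django.core.mail.outbox instead of sending it
EMAIL_BACKEND = "django.core.mail.backends.locmem.EmailBackend"

# Don't cache anything between tests
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.dummy.DummyCache",
    }
}

# MD5 is insecure but fast, which speeds up creating test users
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]
```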

Then, you run your tests with DJANGO_SETTINGS_MODULE=settings.testing (or configure this in pyproject.toml, setup.cfg or pytest.ini).
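With pytest-django, for example, the settings module can be declared once in pyproject.toml:

```toml
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "settings.testing"
```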

End of part 1

We hope we've given you a good starting point for writing tests.

Join us next time, when we'll dive into setting up testing content, testing form submissions, mocking, and much more.

Happy testing!
