    Kees Hink

Python Containers: Best Practices

Arguably the most popular talk at PyCon Italia 2025 was "Python Containers: Best Practices" by Daniel Hervás. The room was full, and people had to be sent away. Seeing the demand, we follow up on our previous blog posts about containers with some in-depth tech details.

Photo: shipping containers, by Martin Vorel (https://martinvorel.com/)

In our previous (Dutch) blog posts we explained why we moved to Docker. In this post, we dive a little deeper.

We will repeat the key points from Daniel's talk and compare them to our current setup.

After all, when about a hundred people choose to attend the first talk on a Saturday morning, there must be something in it.

Talk outline

I will briefly outline Daniel's talk here.

First, he states that https://github.com/jsmitka/examples-python-images-production is an excellent starting point. He says not to trust the internet in general on this subject, medium.com even less, and AI not at all.

He suggests using the USER directive to run your app as an unprivileged user: the principle of least privilege is always a good one. That a container is isolated doesn't mean it's bulletproof: information can be leaked, and the supply chain can be attacked.

He suggests Podman for rootless containers.

As for image size, smaller is better: use the smallest image that does the job, as it speeds up the build process. python-slim is bigger than python-alpine, but he prefers slim.

Apply software updates on top of the base image. This increases build times and sacrifices idempotency, but it patches vulnerabilities: he counted 20+ important ones in a single week.

  • ENV PYTHONDONTWRITEBYTECODE=1: not worth it; don't use what you don't understand
  • Prefer COPY over ADD
  • Disabling the pip cache (ENV PIP_NO_CACHE_DIR=1) saves space
  • as does rm -rf /var/lib/apt/lists/* after installing packages with apt
  • ENV PYTHONUNBUFFERED=1: if you wonder why your logs don't show up in time, the answer is here
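Taken together, the recommendations above could look something like this in a Dockerfile. This is a sketch, not Daniel's actual example; the image tag, paths and the final CMD are placeholders:

```dockerfile
# Small base image, pinned to a specific variant (placeholder tag)
FROM python:3.12-slim-bookworm

# Apply security updates, then clean the apt lists to keep the layer small
RUN apt-get update \
    && apt-get upgrade -y \
    && rm -rf /var/lib/apt/lists/*

# Logs are flushed immediately; pip keeps no download cache
ENV PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1

# Prefer COPY over ADD
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY . /app

# Principle of least privilege: run as an unprivileged user
RUN useradd --create-home app
USER app

CMD ["python", "/app/main.py"]
```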

Audience questions

Some of the questions from the audience were:

  • Do we include test dependencies in production image builds? Answer: no, we don't.
  • What does he think of uv's pre-built Python images? Answer: No opinion.

Our setup

And now for our own setup: do we do it the same way? Where we differ, what are our reasons? And do we have anything to add that might be useful to a larger audience?

Our environment and workflow

We are an agency, and as such we have many different projects that we actively develop. We have even more that we actively maintain.

Most of our projects have some kind of JavaScript build system. For development, our Dockerfiles define a frontend image.

We want to keep the project setup as similar as possible across all of our projects. Therefore, our Dockerfiles may define things that are only used in some projects. Having a generic setup is important to us.

Our projects are hosted on a hosting provider who created a setup specifically for us. We will not cover this part of our setup here.

The following steps are automated using GitLab pipelines:

  • Whenever we create a merge request, an image is built and tests are run against it.
  • After merge, another image is built and tests are run again.
  • When a Release Candidate is tagged, that same image is deployed to Acceptance.
  • When a regular tag is created, that same image is deployed to Production.
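As a sketch, the pipeline rules above could be expressed in a .gitlab-ci.yml along these lines. The job names, the deploy script and the tag convention for Release Candidates are placeholders; our actual pipeline is more involved:

```yaml
build-and-test:
  stage: test
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker run --rm "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" make test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

deploy-acceptance:
  stage: deploy
  script:
    - ./deploy.sh acceptance "$CI_COMMIT_TAG"
  rules:
    - if: $CI_COMMIT_TAG =~ /-rc/      # Release Candidate tags go to Acceptance

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production "$CI_COMMIT_TAG"
  rules:
    - if: $CI_COMMIT_TAG && $CI_COMMIT_TAG !~ /-rc/   # regular tags go to Production
```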

Django base image

We have a registry containing our base Django image, which all of our projects extend. Having a base image allows all projects to benefit from the goodness we put in there. It's DRY (Don't Repeat Yourself) for our container best practices.

This image currently does the following:

  • Use a python:slim-bookworm base image
  • Add unprivileged app user and directory
  • Upgrade packages
  • Install handy utilities like make, curl, nano and bash-completions
  • Install a cron job runner. This is one of these things that we keep around for older projects. We might remove it at some point. Newer projects use Celery.
  • Set a HEALTHCHECK
  • Create and enable a Python virtual env
  • Create a container for running tests (see below)
  • ENV PYTHONDONTWRITEBYTECODE=1
  • ENV PYTHONUNBUFFERED=1
  • ENV PIP_NO_CACHE_DIR=yes
  • Read environment variables like Django's SECRET_KEY from files
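A sketch of what such a base image could look like; the Python version, paths, package names and the healthcheck endpoint are illustrative, not our literal Dockerfile:

```dockerfile
FROM python:3.12-slim-bookworm

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=yes

# Upgrade packages and install handy utilities
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends make curl nano bash-completion \
    && rm -rf /var/lib/apt/lists/*

# Unprivileged app user and directory
RUN useradd --create-home --home-dir /app app
WORKDIR /app
USER app

# Create and enable a Python virtual env
RUN python -m venv /app/venv
ENV PATH="/app/venv/bin:$PATH"

# Projects are expected to answer on this endpoint
HEALTHCHECK --interval=30s --timeout=5s \
    CMD curl --fail http://localhost:8000/health/ || exit 1
```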

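Reading settings such as Django's SECRET_KEY from files (the last bullet above) can be sketched with a small helper in the settings module. The env_or_file name and the _FILE suffix convention are our illustration here, not necessarily the exact code:

```python
import os


def env_or_file(name, default=None):
    """Return an env var, or the contents of the file named in NAME_FILE.

    This follows the common Docker convention where e.g. SECRET_KEY_FILE
    points at a mounted secret file, instead of putting the secret itself
    in the environment.
    """
    file_path = os.environ.get(f"{name}_FILE")
    if file_path:
        with open(file_path) as f:
            return f.read().strip()
    return os.environ.get(name, default)


# In settings.py one would then write, for example:
# SECRET_KEY = env_or_file("SECRET_KEY")
```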
Testing container

The testing container has a run script that starts pytest with the -x switch (--exitfirst), so it aborts after the first failure. This saves valuable compute minutes in our pipelines.

Tests are run with Python's -X dev switch, which enables development mode (extra runtime checks and warnings).
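Such a run script could look something like this (a sketch; pytest's abort-after-first-failure flag is -x, long form --exitfirst):

```shell
#!/bin/sh
# -X dev enables Python's development mode (extra checks and warnings);
# -x makes pytest abort after the first failure, saving pipeline minutes.
exec python -X dev -m pytest -x "$@"
```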

Project image

In our project's Dockerfile, we:

  • Define a frontend container
  • Define a base image using our django-base-image
  • Define a development and production image using the base image

The base image sets the COMMIT_HASH environment variable, which we use in the name of the image that we build. It also sets the environment variables for the Celery app and its beat schedule.

The development container is used for running tests, and as such uses the "testing" Django settings.

The production container uses the "production" Django settings.
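As a sketch, such a project Dockerfile with multiple targets might look like this. The image names, the registry URL and the settings modules are placeholders:

```dockerfile
# Frontend container: builds the JavaScript assets
FROM node:20-slim AS frontend
WORKDIR /src
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Base image: extends our django-base-image
FROM registry.example.com/django-base-image:latest AS base
ARG COMMIT_HASH
ENV COMMIT_HASH=$COMMIT_HASH
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
COPY --from=frontend /src/dist ./static/

# Development image: used for running tests
FROM base AS development
ENV DJANGO_SETTINGS_MODULE=myproject.settings.testing

# Production image
FROM base AS production
ENV DJANGO_SETTINGS_MODULE=myproject.settings.production
```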

Discussion

Agreements

There are many points where we agree with the talk. Some of them are:

  • Use the slim base image
  • Limit user privileges
  • Upgrade packages
  • PYTHONUNBUFFERED=1

Extensions

Using a custom base image (in our case called "django-base-image") helps to keep as much configuration as possible out of the project itself.

This is handy when you have many projects, like we do. When you only maintain one project, it's probably not worth it.

Differences

There are some areas where we deviate from the setup proposed by Daniel:

  • ENV PYTHONDONTWRITEBYTECODE=1: we do set this, because it saves some time, though admittedly it's a minimal optimization.
  • ENV PIP_NO_CACHE_DIR=yes: we don't use a pip cache, because we only install requirements once; a cache would only increase the image size.
  • We include testing dependencies in our images.

Including testing dependencies in our production image is an interesting difference, and a choice we made after careful deliberation. It has some advantages that matter to us:

  • We only have to build one image. This saves time, at the expense of a slightly larger image.
  • We have only one requirements.txt file. This ensures that we test against the same version that goes to production.

The main drawback is that we have to be aware of security issues in testing dependencies. This risk is sufficiently mitigated, because we always run some kind of security step (currently pip-audit) as part of our tests.
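That security step can be as small as a single command against the one requirements file (a sketch; pip-audit accepts -r to audit a requirements file):

```shell
# Fail the pipeline if any pinned dependency has a known vulnerability
pip-audit -r requirements.txt
```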

Conclusion

There are probably as many ways to implement containers as there are projects. We share some of our setup not to convince you to do it our way, but to give you ideas that you may find useful.
