How to Write Unit Tests in Python: A Backend Developer's Guide

Written by Brendon
17 March 2026

Having a solid test suite is non-negotiable for any serious codebase. Learn how to write unit tests in Python.


Writing unit tests in Python is all about having a conversation with your code. You use a framework like pytest or unittest to write small functions that check if your code does what you think it does. At its core, it's simple: you write a test that calls a piece of your application and asserts that the result is exactly what you expect. This isn't just a nice-to-have; it's a fundamental part of building backend systems that don't crumble under pressure.

Why Python Unit Testing Is Your Backend Superpower #

Before you write a single line of test code, you need to understand why this discipline is non-negotiable for any serious backend developer. Unit testing isn’t just some box you check off for your PR. It’s a practice that directly shapes your code quality, your development speed, and—most importantly—your confidence as an engineer.

For anyone trying to break into the field, mastering testing is a massive signal to employers that you build professional, maintainable software. You can learn more about what it takes in our comprehensive guide on how to become a backend developer.

The benefits are real and you feel them almost immediately:

  • Drastically Reduced Debugging Time: Catching a bug early is exponentially cheaper and faster than fixing it after it’s already caused chaos in production. A solid test suite is your first and best line of defense.
  • Improved Code Design: You can't easily test a giant, messy function. The act of writing tests forces you to build smaller, more modular, and loosely-coupled code, which naturally leads to a cleaner architecture.
  • Safe Refactoring: Need to clean up some code or upgrade a dependency? A good test suite is your safety net. It gives you the freedom to make changes, knowing that if you accidentally break something, a failing test will scream at you instantly.

Choosing Your Framework: Pytest vs Unittest #

When it comes to unit testing in Python, you'll run into two main players: unittest and pytest. While unittest is baked into Python's standard library, pytest has pretty much become the go-to standard for modern projects because it's just simpler and more powerful.

The numbers don't lie. Python is a dominant force, with 73% of professional developers using it. Among that group, a staggering 68% report that solid unit testing cuts their debugging time by more than 50%. This whole culture started with unittest, introduced way back in 2001, which laid the foundation for the robust testing practices we rely on today.

So, which one should you pick? Here's a quick rundown to help you decide.

Pytest vs Unittest at a Glance #

| Feature | pytest | unittest |
| --- | --- | --- |
| Syntax | Simple assert statements | Requires self.assertEqual(), self.assertTrue(), etc. |
| Boilerplate | Minimal; just write test functions | Requires classes that inherit from unittest.TestCase |
| Fixtures | Powerful, modular, and reusable setup/teardown functions | Basic setUp() and tearDown() methods |
| Ecosystem | Huge ecosystem of plugins (e.g., pytest-django, pytest-asyncio) | Limited; part of the standard library |
| Test discovery | Automatic; finds test_*.py or *_test.py files | Requires specific naming conventions and structure |
| Learning curve | Very easy to get started with | Steeper, more verbose, and more rigid |

Ultimately, while both frameworks get the job done, pytest lets you write cleaner, more readable, and more powerful tests with less effort.

Key Takeaway: While unittest is built-in and perfectly capable, pytest offers a much more concise syntax and a richer ecosystem of plugins. For most new projects, it's the smarter choice. Its simplicity means you'll spend less time writing boilerplate and more time writing tests that actually matter.
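To see that difference in code, here is the same check written both ways in one file. The slugify helper is a hypothetical stand-in, just for illustration:

```python
import unittest

def slugify(text: str) -> str:
    """Hypothetical helper: lowercase a string and replace spaces with hyphens."""
    return text.strip().lower().replace(" ", "-")

# unittest style: a class, inheritance, and named assertion methods
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# pytest style: a plain function and a bare assert
def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"
```

Both tests verify the exact same behavior; the pytest version just needs less ceremony to say it.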

Setting Up Your Python Testing Environment #

Before we even think about writing a single test, we need to get our house in order. A sloppy setup leads to a sloppy testing process. Trust me. Getting the environment right from the start saves you from a world of pain down the line, especially when working on real-world backend projects.

The absolute first step is to create a virtual environment. This is non-negotiable. It creates an isolated sandbox for your project, so the packages you install for one project don't clash with another. It’s a habit every professional developer swears by.

Just navigate to your project's root folder and run one of these commands:

For Mac/Linux #

python3 -m venv venv
source venv/bin/activate

For Windows #

python -m venv venv
.\venv\Scripts\activate

You’ll know it’s working when you see (venv) prepended to your terminal prompt. From now on, any package we install stays right here in this environment.

Installing Your Core Testing Tools #

With our virtual environment active, it’s time to bring in the tools of the trade. We’ll use pip, Python's package installer, to grab two essential libraries: pytest and coverage.py.

Pytest is the de facto standard for testing in Python. It’s powerful, easy to use, and makes writing tests a breeze. Coverage.py measures how much of your code is actually being run by your tests, which is crucial for spotting untested logic.

In your terminal, run this simple command:

pip install pytest coverage

This single line downloads and installs both packages into our clean virtual environment. We'll be using pytest to write and run tests and coverage to find out where our blind spots are.

Pro Tip: Always pin your dependencies. After you install your packages, run pip freeze > requirements.txt. This saves the exact versions of everything you're using, making your build 100% repeatable for you or anyone else on your team.

Structuring Your Project for Success #

Code organization isn't just about being neat; it's about making your project understandable and scalable. A messy folder structure is a nightmare to navigate and makes finding anything a chore. We're going to use a standard, battle-tested layout that you'll see in professional projects everywhere.

The convention is simple: create a tests/ directory right inside your project's root folder. This is where all your test files will live. Pytest is smart enough to automatically discover and run any file in this directory that starts with test_ or ends with _test.py.

A typical backend project structure would look something like this:

my_api_project/
├── venv/
├── my_api/
│   ├── __init__.py
│   ├── utils.py
│   └── models.py
├── tests/
│   ├── __init__.py
│   └── test_utils.py
└── requirements.txt

See how clean that is? Your application code (my_api/) is completely separate from your test code (tests/). This is a fundamental best practice that makes maintaining your codebase so much easier.

Finally, let’s add a tiny configuration file to make our lives a bit easier. Create a file named pytest.ini in your project's root directory. This file lets us customize how pytest behaves.

For now, we'll just add a couple of basic options to make the test output cleaner and tell it where our source code is.

[pytest]
addopts = -ra -q
testpaths =
    tests

The addopts = -ra -q bit tells pytest two things: -ra prints a short summary of every test outcome except the ones that pass, and -q quiets the rest of the output. Together they keep the report tidy, focusing your attention on what's broken.

And that's it. Your environment is now set up like a pro's. You're ready to start writing some actual tests.

Writing Your First Python Unit Tests with Pytest #

Alright, with our project set up and our virtual environment ready, it’s time for the fun part: actually writing some tests. We're going to see firsthand why so many developers prefer pytest for its simplicity and power.

I’m skipping the classic add(2, 2) examples you see everywhere. They don't teach you much. Instead, we’ll jump right into a scenario you’d find in any real backend application.

Let's say we have a small utility function living in my_api/utils.py. Its only job is to check if a string is a valid email address before we try to create a new user or process a form. This kind of input validation is non-negotiable for building stable systems, a fundamental concept you'll see emphasized in REST API design best practices.

Here’s the function we want to test:

my_api/utils.py

import re

def is_valid_email(email: str) -> bool:
    """Validates if the provided string is a well-formed email address."""
    if not isinstance(email, str):
        return False
    # A simple regex for email validation
    pattern = r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$"
    return re.match(pattern, email) is not None

It’s a straightforward function. It first checks if the input is even a string, then uses a regular expression to validate the format. Now, let's write some tests to prove it works exactly like we think it does.

Creating Your First Test File #

Based on the project structure we set up earlier, we'll create a new file: tests/test_utils.py. That test_ prefix is non-negotiable; it's how pytest automatically discovers which files contain tests it needs to run.

Inside this file, we’ll import our is_valid_email function and write a few test functions to check its behavior. Each test function will also start with test_, telling pytest that this is a callable test case.

Here’s what our first batch of tests looks like.

tests/test_utils.py

from my_api.utils import is_valid_email

def test_valid_email_is_accepted():
    """Verify that a standard, valid email passes validation."""
    assert is_valid_email("test@example.com") is True

def test_invalid_email_is_rejected():
    """Verify that an email without an '@' symbol fails."""
    assert is_valid_email("testexample.com") is False

def test_email_with_no_domain_is_rejected():
    """Verify that an email without a domain name fails."""
    assert is_valid_email("test@") is False

def test_non_string_input_is_rejected():
    """Ensure that non-string inputs are handled gracefully."""
    assert is_valid_email(None) is False
    assert is_valid_email(12345) is False

Notice how clean and readable that is? There are no clunky self.assertTrue() or other boilerplate methods you might see with older frameworks. With pytest, you use a plain Python assert statement. If the condition after assert evaluates to False, the test fails, and pytest gives you a wonderfully detailed report.
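As the suite grows, pytest's built-in parametrize marker lets you collapse near-identical tests like these into a single table-driven test. Here's a sketch that inlines is_valid_email so it runs standalone:

```python
import re

import pytest

def is_valid_email(email: str) -> bool:
    # Inlined copy of the helper from my_api/utils.py so this sketch is self-contained
    if not isinstance(email, str):
        return False
    pattern = r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$"
    return re.match(pattern, email) is not None

@pytest.mark.parametrize(
    ("candidate", "expected"),
    [
        ("test@example.com", True),   # well-formed address
        ("testexample.com", False),   # missing '@'
        ("test@", False),             # missing domain
        (None, False),                # non-string input
    ],
)
def test_is_valid_email(candidate, expected):
    # pytest reports each tuple above as its own test case
    assert is_valid_email(candidate) is expected
```

Each row shows up individually in the test report, so a failure still points at the exact input that broke.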

Running Your Tests and Reading the Output #

With the test file saved, pop open your terminal in the project's root folder (the one with your pytest.ini file) and just run this simple command:

pytest

Thanks to our setup, pytest scans the project, finds the tests/ directory, discovers our test_utils.py file, and executes every function inside that starts with test_.

If everything is correct, you'll get some satisfying green output.

===================== test session starts =====================
...
collected 4 items

tests/test_utils.py .... [100%]

====================== 4 passed in 0.01s ======================

See those four dots (....)? Each one represents a passing test. That instant feedback is addictive and a huge confidence booster.

The core feedback loop when writing unit tests is simple but powerful: Write code -> Write a test -> Run the test -> See it pass. This tight cycle is what makes you so much more productive.

But what happens when something breaks? Let’s add a new, intentionally incorrect test to test_utils.py to see pytest's error reporting in action. Let's pretend we mistakenly believe an email without a top-level domain should be valid.

Add this failing test to tests/test_utils.py #

def test_email_without_tld_is_accepted():
    """This test is designed to fail to show pytest output."""
    assert is_valid_email("test@example") is True

Run pytest again. The output this time will be dramatically different. You'll get a big, red F and a detailed report showing you exactly which assert statement failed, what the values were, and where the failure occurred. This immediate, precise feedback helps you fix bugs in seconds, not hours.

Going Pro: Advanced Testing with Mocking and Fixtures #

Simple functions are easy to test. But let's be real, backend applications are never that simple. They're messy, interconnected systems that talk to databases, call external APIs, and depend on all sorts of other services.

So how do you test a piece of code that relies on something you don't control, like a third-party API or a live database? This is where you graduate from basic assertions and start using two of the most powerful tools in a developer's testing toolkit: fixtures and mocking.

These techniques are all about one thing: isolation. You want your tests to be fast, reliable, and completely independent of the outside world. Letting your tests make real network requests or hit a live database is a recipe for slow, flaky tests that fail for reasons that have nothing to do with your code. Instead, we create controlled, predictable stand-ins.

Stop Repeating Yourself: Managing Test Data with Pytest Fixtures #

Picture this: you have five different tests that all need a sample user object or a database connection to run. The naive approach is to create that object at the start of every single test function. It works, but your test files quickly become a cluttered, repetitive mess.

A much cleaner way is to use pytest fixtures.

A fixture is just a function that you define once and then reuse across any number of tests. Before your test runs, pytest executes the fixture and "injects" whatever it returns as an argument into your test function. It's perfect for setting up resources like pre-configured objects or temporary files—and even for cleaning them up afterward.

Let's create a fixture that provides a sample user dictionary.

tests/conftest.py

import pytest

@pytest.fixture
def sample_user_payload():
    """Provides a sample user data dictionary for tests."""
    return {
        "username": "testuser",
        "email": "test@example.com",
        "is_active": True
    }

By placing this in a special file named conftest.py inside your tests/ directory, pytest automatically makes the fixture available to all your tests. Now, any test that needs this data can just ask for it by name.

tests/test_user_logic.py

def test_user_activation_logic(sample_user_payload):
    """Ensure user activation status is correctly read from payload."""
    # The 'sample_user_payload' argument is provided by our fixture!
    assert sample_user_payload["is_active"] is True

def test_username_extraction(sample_user_payload):
    """Verify username can be correctly extracted."""
    assert sample_user_payload["username"] == "testuser"

No more copy-pasting dictionaries. The code is instantly cleaner. And if you need to update the sample user, you only have to change it in one place. This is fundamental to writing tests that are easy to maintain.
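Fixtures can also clean up after themselves. If a fixture uses yield instead of return, everything before the yield is setup and everything after it is teardown, which pytest runs even when the test fails. A small sketch with a temporary file (the fixture name is made up for illustration):

```python
import os
import tempfile

import pytest

@pytest.fixture
def temp_config_file():
    # Setup: create a throwaway file before the test runs
    fd, path = tempfile.mkstemp(suffix=".ini")
    os.close(fd)
    yield path  # the test body runs here, receiving the path
    # Teardown: runs after the test finishes, even if it failed
    os.remove(path)

def test_config_file_is_writable(temp_config_file):
    with open(temp_config_file, "w") as f:
        f.write("[app]\ndebug = true\n")
    with open(temp_config_file) as f:
        assert "debug = true" in f.read()
```

This pattern is how teams manage database connections, temp directories, and test servers without leaking state between tests.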

Faking It: Isolating Code with Mocking #

Fixtures are great for providing data, but what about dependencies that perform an action, like calling an external API? If your function tries to fetch data from a weather service, you don't want your unit test to fail just because your Wi-Fi is out or the service is down.

The goal is to test your code, not the weather service. This is where mocking comes in.

Mocking is the practice of replacing a part of your system—like an external dependency—with a fake object that you completely control. For this, we can use Python's built-in unittest.mock library, which integrates seamlessly with pytest.

Let's say we have a function that gets the current temperature by calling a fictional weather API.

my_api/weather.py

import requests

def get_current_temperature(city: str):
    """Fetches the current temperature for a city from an external API."""
    response = requests.get(f"https://api.weather.com/v1/current?city={city}")
    response.raise_for_status() # Raise an exception for bad status codes
    return response.json()["temperature"]

To test this function properly, we need to prevent it from making a real network request. We can use unittest.mock.patch to intercept the requests.get call and force it to return a response of our choosing.

tests/test_weather.py

from unittest.mock import patch
from my_api.weather import get_current_temperature

@patch('my_api.weather.requests.get')
def test_get_current_temperature(mock_get):
    """
    Test temperature retrieval by mocking the requests.get call.
    """
    # Configure the mock object to behave like a real response
    mock_response = mock_get.return_value
    mock_response.status_code = 200
    mock_response.json.return_value = {"temperature": 72}

    # Call our function
    temperature = get_current_temperature("New York")

    # Assert our function behaves correctly
    assert temperature == 72

    # We can even check that our code called the external API correctly!
    mock_get.assert_called_once_with("https://api.weather.com/v1/current?city=New York")

The Power of Isolation: By mocking requests.get, we’ve completely detached our get_current_temperature function from the outside world. Our test is now blazing fast and 100% reliable because it has zero external dependencies. We are testing our logic, and only our logic.
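The same patching trick covers the failure path too. This sketch inlines the weather function so it runs standalone (in the real project you'd keep patching 'my_api.weather.requests.get') and forces raise_for_status to throw, confirming the error propagates to the caller:

```python
from unittest.mock import patch

import pytest
import requests

def get_current_temperature(city: str):
    # Inlined copy of my_api/weather.py so this sketch is self-contained
    response = requests.get(f"https://api.weather.com/v1/current?city={city}")
    response.raise_for_status()
    return response.json()["temperature"]

@patch("requests.get")  # in the project, patch "my_api.weather.requests.get"
def test_api_error_propagates(mock_get):
    # Simulate a 500 from the service: raise_for_status() raises HTTPError
    mock_get.return_value.raise_for_status.side_effect = requests.HTTPError("500 Server Error")
    with pytest.raises(requests.HTTPError):
        get_current_temperature("New York")
    # The function never reached the json() call
    mock_get.return_value.json.assert_not_called()
```

Testing the unhappy path this way is just as important as the happy path: you're proving your code fails loudly and predictably when a dependency misbehaves.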

These are the kinds of techniques professionals use daily. A 2026 PractiTest analysis of 16 top unit testing tools found that teams using pytest had 40% faster test execution on large projects compared to unittest. For backend engineers here at Codeling, learning to write tests in Python this way is how we catch errors early and hit code coverage of 90% or more—a standard on top-tier projects.

Measuring Success with Code Coverage and CI #

So you've written a bunch of tests. You're feeling confident. But how can you be sure you've tested everything that matters? What parts of your code are still flying blind, with no safety net?

This is exactly where code coverage comes in. It's a tool that answers one simple, crucial question: "Which lines of my code did my tests actually run?"

Think of it as a diagnostic tool, not a report card on your skills. A low coverage score isn't a failure—it's a treasure map showing you the dark corners of your application that need attention. The goal isn't always 100% coverage, but rather to make sure every critical piece of your business logic is actually being tested.

Generating Your First Coverage Report #

We already installed the coverage.py library, and it plays beautifully with pytest. Getting a report is surprisingly simple. Instead of just running pytest directly, you run it through the coverage command.

From the root of your project, just run this:

coverage run -m pytest

This runs your test suite exactly like before, but coverage is watching in the background, keeping track of every line of your application code that gets executed. Once it’s done, you can check out the results.

For a quick summary right in your terminal, run this command:

coverage report -m

You'll get a clean table showing each file, the number of statements, how many were missed, and your total coverage percentage. It even points out the specific line numbers you missed, giving you an instant to-do list.

Visualizing Gaps with an HTML Report #

The terminal report is great for a quick check, but for really understanding the gaps, nothing beats a visual HTML report. coverage.py can generate an interactive webpage that color-codes your source files, making it obvious what’s tested and what’s not.

To create it, just run:

coverage html

This will create a new htmlcov/ directory in your project. Pop open the index.html file in your browser, and you’ll see a beautiful line-by-line breakdown of your coverage. It's the easiest way to spot entire functions or conditional branches your tests never even touched.
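As you triage the report, two real coverage.py settings help keep the numbers honest: marking a line with a # pragma: no cover comment excludes it from the count, and a .coveragerc file in the project root can scope measurement to your package and define exclusion patterns. A minimal sketch:

```ini
# .coveragerc
[run]
# Only measure our application package, not tests or the venv
source = my_api

[report]
# Lines matching these regexes are excluded from the report
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
```

Use exclusions sparingly; they're for genuinely untestable lines (debug-only branches, script entry points), not for hiding logic you haven't gotten around to testing.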

This is often where the real work begins—you'll look at these reports, figure out what's missing, and then decide where to use fixtures for setup or mocks for isolation to build a truly solid test suite.

This cycle is key: you set up resources (fixtures), isolate external parts (mocks), and then run your checks (tests). The coverage report helps you refine this loop.

Automating Your Safety Net with Continuous Integration #

Running tests and coverage reports on your own machine is good practice, but the real magic happens when you automate it. Continuous Integration (CI) is the process of automatically running your tests every single time code gets pushed to your repository. It acts as a gatekeeper, ensuring that broken code never makes it into your main branch.

GitHub Actions provides a straightforward and powerful way to set up a CI pipeline. All you need to do is add a YAML file to a .github/workflows/ directory in your project.

Here’s a basic workflow file—we can call it ci.yml—to get you started:

.github/workflows/ci.yml

name: Python CI

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Run tests with coverage
        run: |
          coverage run -m pytest
          coverage report

Once this file is in your repository, GitHub will automatically spin up a clean environment, install your project’s dependencies, and run your full test suite on every push. If a single test fails, the commit is flagged, stopping a bad merge in its tracks.

This automated feedback loop is the bedrock of modern software development. It enforces quality, gives the whole team confidence, and lets you ship features faster without constantly worrying about breaking something.

The numbers back this up. Stack Overflow insights connect Python's massive 73% developer adoption rate to its powerful testing ecosystem. Projects that hit over 80% code coverage tend to see 37% fewer defects, which is a game-changer for database-driven APIs. Other studies show these well-tested projects suffer from 35% fewer bugs in production—leading to the kind of scalable, impressive apps that recruiters love to see. You can dig deeper into how top Python testing frameworks drive these results.

Best Practices and Common Pitfalls to Avoid #

Knowing how to write a unit test is one thing. Knowing how to write a good one is what separates a working test suite from a maintenance nightmare that everyone on your team hates.

To keep your tests valuable, a good rule of thumb is to follow the FIRST principles. It's a simple acronym that acts as a quality checklist for every test you write.

  • Fast: Your tests need to run in milliseconds. If the suite takes minutes, developers will stop running it locally. This completely defeats the purpose of having a rapid feedback loop.
  • Isolated: Each test has to stand on its own. It should run independently, in any order, without relying on the state left behind by another test.
  • Repeatable: A test must give you the same result every single time, no matter where it runs. If it passes on your machine but flakes out in the CI pipeline, it’s not repeatable.
  • Self-Validating: The test should clearly pass or fail on its own. No one should ever have to read a log file or manually check a value to see if the test worked.
  • Timely: Write tests alongside the code they’re meant to validate. Ideally, you write the test just before the production code, following a Test-Driven Development (TDD) workflow.

Avoiding Common Testing Traps #

Writing good tests is also about knowing what not to do. The single biggest mistake I see developers make is testing implementation details instead of behavior.

For example, don't write a test to assert that a specific private method was called inside a function. Who cares? What you should test is the public outcome. Does the function return the correct value? Does it have the right side effect? This makes your tests resilient. You can refactor the internals of a function all day long, and as long as the behavior remains the same, your tests will still pass. We go much deeper on this topic in our guide to testing behavior versus implementation.
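Here's a small sketch of the distinction with a hypothetical register_email function. The behavioral test survives any refactor that preserves the result:

```python
def _normalize(email: str) -> str:
    # Internal detail: free to rename or inline during a refactor
    return email.strip().lower()

def register_email(email: str) -> str:
    """Hypothetical function under test: returns the normalized address."""
    return _normalize(email)

# Brittle (don't do this): asserting that _normalize was called couples the
# test to an internal name, so even a safe rename breaks the suite.

# Resilient: assert WHAT the caller observes, not HOW it was computed.
def test_register_returns_normalized_email():
    assert register_email("  X@Y.COM ") == "x@y.com"
```

If _normalize gets inlined, renamed, or replaced tomorrow, the behavioral test keeps passing because the observable outcome hasn't changed.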

Another pitfall is creating brittle tests. These are tests that are so tightly coupled to the code that they break at the slightest change, even if the change is completely unrelated. This often happens when tests rely on fragile things like the exact HTML structure of a page or hard-coded error messages.

A brittle test is worse than no test at all. It erodes trust in the test suite, leading developers to ignore legitimate failures because they're used to seeing false alarms.

Embrace Modern Testing Practices #

Finally, keep your test suite lean. It's easy to get obsessed with 100% coverage and start writing tests for every trivial line of code. Don't. Focus your energy on the complex business logic, the tricky edge cases, and the critical paths that absolutely must not break.

The world of testing is also evolving quickly. A 2026 report noted that AI-augmented tools can now automatically write unit tests covering 85% of edge cases in Python projects, a massive leap from just 40% back in 2020. This growth is powered by the incredible ecosystem around tools like pytest, which boasts over 1,300 plugins for just about everything. You can learn more about the latest trends in Python testing frameworks to see what's on the horizon.

By letting these tools handle the simple, boilerplate tests, you can free up your time to focus your human brain on the parts of your application that actually require deep thought.

Frequently Asked Questions #

As you get deeper into writing tests, a few questions tend to pop up again and again. Let's clear up some of the most common points of confusion so you can get back to writing solid, reliable tests.

What’s the Difference Between a Unit Test and an Integration Test? #

This is a classic question that trips up almost everyone at first, but the distinction is actually pretty simple once it clicks. Think of it this way: a unit test is all about isolation. You’re grabbing one tiny piece of your code—a single function, a specific method—and testing it completely on its own.

To make sure it’s truly isolated, you'll use tools like mocks to fake everything around it, like database calls or API requests. The goal is to prove that this one function does its job correctly, without any outside interference.

An integration test, on the other hand, is about teamwork. It checks how different parts of your system work together. Does your API endpoint correctly save data to the database? That’s an integration test because it involves your application code, the database driver, and the database itself all playing nicely.
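A tiny sketch of the difference, using a hypothetical save_user helper and Python's built-in sqlite3 so both tests stay runnable:

```python
import sqlite3
from unittest.mock import MagicMock

def save_user(conn, username: str) -> int:
    """Hypothetical helper: inserts a user and returns the new row id."""
    cur = conn.execute("INSERT INTO users (username) VALUES (?)", (username,))
    conn.commit()
    return cur.lastrowid

# Unit test: the database is faked, so we only verify our function's logic
def test_save_user_returns_row_id_unit():
    conn = MagicMock()
    conn.execute.return_value.lastrowid = 7
    assert save_user(conn, "brendon") == 7

# Integration test: a real (in-memory) database checks the pieces together
def test_save_user_persists_row_integration():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    row_id = save_user(conn, "brendon")
    stored = conn.execute(
        "SELECT username FROM users WHERE id = ?", (row_id,)
    ).fetchone()
    assert stored == ("brendon",)
```

The unit test runs in microseconds and never touches disk; the integration test is slower but proves the SQL, the driver, and your code actually agree with each other.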

How Much Code Coverage Should I Aim For? #

It's tempting to chase that 100% coverage metric, but honestly, it’s often a trap. You end up spending hours writing tests for trivial code, and the return on that effort drops off fast. For most projects, aiming for 80-90% coverage is a much healthier and more practical goal.

The number itself isn't the point. What matters is what you're testing. Your real priority should be making sure every critical piece of business logic, every complex conditional branch, and every error-handling path is locked down. Don't waste your time writing tests for simple getters and setters just to bump up a percentage.

Can I Use Pytest for an Existing Project That Uses Unittest? #

Yes, absolutely. This is one of the killer features of pytest and a big reason why so many teams switch over. It was built from the ground up to be compatible with existing unittest test suites.

You can install pytest in a project that's full of unittest classes and run your entire test suite immediately, no changes needed. pytest will find and execute them perfectly. This gives you a seamless migration path. You can start writing all your new tests with pytest's cleaner syntax and powerful fixtures, while gradually converting the old tests whenever you have the time. It’s the best of both worlds without the headache of a massive, all-at-once rewrite.
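In practice, that means a legacy file like this one runs under pytest untouched. The apply_discount helper is hypothetical, but the mixed style is exactly what a mid-migration project looks like:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical legacy helper covered by an old unittest suite."""
    return round(price * (1 - percent / 100), 2)

# Old-style test: pytest discovers and runs this TestCase class as-is
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

# New-style test in the same file: plain function, bare assert
def test_no_discount():
    assert apply_discount(50.0, 0) == 50.0
```

Run pytest in that project's root and both tests show up in the same report, which is why the gradual migration is so painless.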