I've blogged about architecturally-aligned testing before; the core idea is that our automated tests should be aligned with the constructs we use to describe a software system. I want to expand upon this topic and clarify a few things, but first...
In a nutshell, I think the terms "unit test" and "integration test" are ambiguous and open to interpretation. We all tend to have our own definitions, but I've seen those definitions vary wildly, which again highlights that our industry often lacks a shared vocabulary. What is a "unit"? How big is a "unit"? And what does an "integration test" test the integration of? Are we integrating components? Or are we integrating our system with the database? The Wikipedia page for unit testing talks about individual classes or methods ... and that's good, so why don't we use more precise terminology that explicitly specifies the scope of the test instead?
I won't pretend to have all of the answers here, but aligning the names of the tests with the things they are testing seems to be a good starting point. And if you have a nicely defined way of describing your software system, the names for those tests fall into place really easily. Again, this comes back to creating that shared vocabulary - a ubiquitous language for describing the structures at different levels of abstraction within a software system.
Here are some example test definitions for a software system written in Java or C#, based upon my C4 model (a software system is made up of containers, containers contain components, and components are made up of classes). Illustrative code sketches for each type of test follow the table.
Name | What? | Test scope? | Mocking, stubbing, etc? | Lines of production code tested per test case? | Speed |
---|---|---|---|---|---|
Class tests | Tests focused on individual classes, often by mocking out dependencies. These are typically referred to as “unit” tests. | A single class - one method at a time, with multiple test cases to test each path through the method if needed. | Yes; classes that represent service providers (time, random numbers, key generators, other non-deterministic behaviour, etc) and classes that interact with “external” services via inter-process communication (e.g. databases, messaging systems, file systems, other infrastructure services, etc ... the adapters in a “ports and adapters” architecture). | Tens | Very fast running; test cases typically execute against resources in memory. |
Component tests | Tests focused on components/services through their public interface. These are typically referred to as “integration” tests and include interactions with “external” services (e.g. databases, file systems, etc). See also ComponentTest on Martin Fowler's bliki. | A component or service - one operation at a time, with multiple test cases to test each path through the operation if needed. | Yes; other components. | Hundreds | Slightly slower; component tests may incur network hops to span containers (e.g. JVM to database). |
System tests | UI, API, functional and acceptance tests (“end-to-end” tests; from the outside-in). See also BroadStackTest on Martin Fowler's bliki. | A single scenario, use case, user story, feature, etc across a whole software system. | Not usually, although perhaps other systems if necessary. | Thousands | Slower still; a single test case may require an execution path through multiple containers (e.g. web browser to web server to database). |
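To make the distinction concrete, here's a minimal sketch of a class test in Java with JUnit 5. The `OrderPricer` class and its `TaxRateProvider` dependency are hypothetical names invented for illustration, and defined inline to keep the example self-contained; the point is that the single collaborating service provider is stubbed out in memory, so the test runs very fast.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class OrderPricerClassTests {

    // A service-provider dependency that a class test would stub out
    // (hypothetical; in a real codebase this lives in production code).
    interface TaxRateProvider {
        int ratePercentFor(String countryCode);
    }

    // The single class under test (hypothetical).
    static class OrderPricer {
        private final TaxRateProvider taxRates;

        OrderPricer(TaxRateProvider taxRates) {
            this.taxRates = taxRates;
        }

        long priceInPence(long netPence, String countryCode) {
            return netPence + (netPence * taxRates.ratePercentFor(countryCode)) / 100;
        }
    }

    @Test
    void price_addsTaxForTheGivenCountry() {
        // Stub the dependency in memory; no database, no network.
        OrderPricer pricer = new OrderPricer(country -> 20);

        // One method, one path through it; other paths get their own test cases.
        assertEquals(12000, pricer.priceInPence(10000, "GB"));
    }
}
```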
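By contrast, a component test exercises a component through its public interface, with the real code behind that interface wired together rather than mocked. This sketch assumes a hypothetical `OrdersComponent` backed by JDBC, and uses an in-memory H2 database purely so the example stays self-contained; in practice the test might incur a network hop to a real database running in another container, which is why these tests run slower than class tests.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.junit.jupiter.api.Test;

class OrdersComponentTests {

    // The component's public interface (hypothetical); everything behind it
    // is exercised together, not mocked.
    interface OrdersComponent {
        void submit(String customerId, long totalPence);
        int countFor(String customerId);
    }

    static class JdbcOrdersComponent implements OrdersComponent {
        private final Connection connection;

        JdbcOrdersComponent(Connection connection) throws SQLException {
            this.connection = connection;
            try (Statement s = connection.createStatement()) {
                s.execute("CREATE TABLE IF NOT EXISTS orders(customer_id VARCHAR(36), total_pence BIGINT)");
            }
        }

        public void submit(String customerId, long totalPence) {
            try (PreparedStatement ps = connection.prepareStatement(
                    "INSERT INTO orders(customer_id, total_pence) VALUES (?, ?)")) {
                ps.setString(1, customerId);
                ps.setLong(2, totalPence);
                ps.executeUpdate();
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }

        public int countFor(String customerId) {
            try (PreparedStatement ps = connection.prepareStatement(
                    "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
                ps.setString(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    return rs.getInt(1);
                }
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }
    }

    @Test
    void submittedOrdersCanBeCountedPerCustomer() throws Exception {
        // A real database engine (in-memory H2 here; a networked database in
        // practice) rather than a mocked data access layer.
        try (Connection connection = DriverManager.getConnection("jdbc:h2:mem:orders")) {
            OrdersComponent orders = new JdbcOrdersComponent(connection);
            orders.submit("customer-1", 12000);
            orders.submit("customer-1", 5000);
            assertEquals(2, orders.countFor("customer-1"));
        }
    }
}
```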
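And finally, a system test drives the whole software system from the outside-in. This sketch assumes an API-level test against a system already running at a hypothetical local URL (the endpoint and payload are illustrative); the single HTTP request exercises an execution path through the web server, the components behind it, and the database, which is why a single test case is slower still.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class OrderSubmissionSystemTests {

    // Assumed deployment; the system under test is started separately.
    private static final String BASE_URL = "http://localhost:8080";

    @Test
    void submittingAnOrderReturnsCreated() throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // One scenario across the whole system: web server, components, database.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"customerId\":\"customer-1\",\"totalPence\":12000}"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(201, response.statusCode());
    }
}
```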
These test definitions don't say anything about TDD, when you write your tests, or how many tests you should write. That's all interesting, but it's independent of the topic we're discussing here. So then, do *you* have a good, clear definition of what "unit testing" means? Do your colleagues at work agree? Can you say the same for "integration testing"? What do you think about aligning the names of the tests with the things they are testing? Thoughts?
Simon is an independent consultant specializing in software architecture, and the author of Software Architecture for Developers (a developer-friendly guide to software architecture, technical leadership and the balance with agility). He’s also the creator of the C4 software architecture model and the founder of Structurizr, which is a collection of open source and commercial tooling to help software teams visualise, document and explore their software architecture.
You can find Simon on Twitter at @simonbrown ... see simonbrown.je for information about his speaking schedule, videos from past conferences and software architecture training.