Modularity and testability

Designing software requires conscious effort; let's not stop thinking.

I've been writing blog posts covering a number of topics over the past few months: from the conflict between software architecture and code and architecturally-evident coding styles, through to representing a software architecture model as code and how microservice architectures can easily turn into distributed big balls of mud. The common theme running through all of them is structure, and structure in turn has a relationship with testability.

The TL;DR version of this post is: think about modularity, think about how you structure your code, think about the options you have for testing your code and stop making everything public.

1. The conflict between software architecture and code

I've recently been talking a lot about the disconnect between software architecture and code. George Fairbanks calls this the "model-code gap". In essence, the abstractions we consider at the architecture level (components, services, modules, layers, etc) are often not explicitly reflected in the code. A major cause is that we don't have those concepts in OO programming languages such as Java, C#, etc. You can't write public component X in Java, for example.

2. The "unit testing is wasteful" thing

Hopefully, we've all seen the "unit testing is wasteful" thing, and all of the follow-up discussion. The unfortunate thing about much of the discussion is that "unit testing" has been used interchangeably with "TDD". In my mind, the debate is about unit testing rather than TDD as a practice. I'm not a TDDer, but I do write automated tests. I mostly write tests afterwards. But sometimes I write them beforehand, particularly if I want to test-drive my implementation of something before integrating it. If TDD works for you, that's great. If not, don't worry about it. Just make sure that you *do* write some tests. :-)

There are, of course, a number of sides to the debate, but in "TDD is dead. Long live testing." (ignore the title), DHH makes some good points about the numbers and types of tests that a system should have. To quote (strikethrough mine):

I think that's the direction we're heading. Less emphasis on unit tests, because we're no longer doing test-first as a design practice, and more emphasis on, yes, slow, system tests. (Which btw do not need to be so slow any more, thanks to advances in parallelization and cloud runner infrastructure).

The type of software system you're building will also have an impact on the number and types of tests. I once worked on a system where we had a huge number of integration tests, but very few unit tests, primarily because the system actually did very little aside from get data from a Microsoft Dynamics CRM system (via web services) and display it on some web pages. I've also worked on systems that were completely the opposite, with lots of complex business logic.

There's another implicit assumption in all of this ... what's the "unit" in "unit testing"? For many it's an isolated class, but for others the word "unit" can be used to represent anything from a single class through to an entire sub-system.

3. The microservices hype

Microservices is the new, shiny kid in town. There *are* many genuine benefits from adopting this style of architecture, but I do worry that we're simply going to end up building the next wave of distributed big balls of mud if we're not careful. Technologies like Spring Boot make creating and deploying microservices relatively straightforward, but the design thinking behind partitioning a software system into services is still as hard as it's ever been. This is why I've been using this slide in my recent talks.

If you can't build a structured monolith, what makes you think microservices is the answer!?


Uncle Bob Martin posted Microservices and Jars last month, which touches upon the topic of building monolithic applications that do have a clean internal structure, by using the concept of separately deployable units (e.g. JARs, DLLs, etc). Although he doesn't talk about the mechanisms needed to make this happen (e.g. plugin architectures, Java classloaders, etc), it's all achievable. I rarely see teams doing this though.

Structuring our code for modularity at the macro level, even in monolithic systems, provides a number of benefits, and it's a simple way to reduce the model-code gap. In other words, we structure our code to reflect the structural building blocks (e.g. components, services, modules) that we define at the architecture level. If there are "components" on the architecture diagrams, I want to see "components" in the code. This alignment of architecture and code has positive implications for explaining, understanding, maintaining, adapting and working with the system.

It's also about avoiding big balls of mud, and I want to do this by enforcing some useful boundaries in order to slice up my thousands of lines of code/classes into manageable chunks. Uncle Bob suggests that you can use JARs to do this. There are other modularity mechanisms available in Java too, including SPI, CDI and OSGi. But you don't even need a plugin architecture to build a structured monolith. Simply using the scoping modifiers built into Java is sufficient to represent the concept of a lightweight in-process component/module.

Stop making everything public

We need to resist the temptation to make everything public though, because this is often why codebases turn into a sprawling mass of interconnected objects. I do wonder whether the keystrokes used to write public class are ingrained into our muscle memory as developers. As I said during my closing session at DevDay in Krakow last week, we should make a donation to charity every time we type public class without thinking about whether that class really needs to be public.

Donate to charity every time you type public class without thinking

A simple way to create a lightweight component/module in Java is to create a public interface and keep all of the implementation (one or more classes) package protected, ensuring there is only one "component" per package. Here's an example of such a component, which also happens to be a Spring Bean. This isn't a silver bullet and there are trade-offs that I have consciously made (e.g. shared domain classes and utility code), but it does at least illustrate that all code doesn't need to be public. Proponents of DDD and ports & adapters may disagree with the naming I've used but, that aside, I do like the stronger sense of modularity that such an approach provides.
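As a concrete sketch of this idea (the names TweetComponent and InMemoryTweetComponent are illustrative, not taken from the original code), a lightweight component can be a single package containing one public interface and a package protected implementation. In a Spring application the implementation class could additionally be annotated with @Component so the framework can instantiate it even though it isn't public.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// The component's API: the only public type in its package.
public interface TweetComponent {
    void store(String tweet);
    List<String> recentTweets();
}

// Package protected: invisible outside the package, so other code is
// forced to depend on the TweetComponent interface rather than on this
// class. In a Spring application this class could carry a @Component
// annotation and still remain non-public.
class InMemoryTweetComponent implements TweetComponent {
    private final List<String> tweets = new ArrayList<>();

    @Override
    public void store(String tweet) {
        tweets.add(tweet);
    }

    @Override
    public List<String> recentTweets() {
        return Collections.unmodifiableList(tweets);
    }
}
```

In a real codebase each component would live in its own package, with a factory or dependency injection wiring the implementation to the interface, so callers never name the implementation class at all.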


And now you have some options for writing automated tests. In this particular example, I've chosen to write automated tests that treat the component as a single thing, going through the component API to the database and back again. You can still do class-level testing too (inside the package), but only if it makes sense and provides value. You can also do TDD, both at the component API and the component implementation level. Treating your components/modules as black boxes results in a slightly different testing pyramid, in that it changes the balance of class and component tests.
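A component-level test of this kind treats the component as a black box: construct it, drive it through its public API, and assert on what comes back. Here is a minimal sketch, with an invented in-memory OrderComponent standing in for a real database-backed one (all names here are hypothetical, and plain assertions are used instead of a test framework to keep the sketch self-contained):

```java
import java.util.ArrayList;
import java.util.List;

// A tiny stand-in component, defined inline so the sketch is
// self-contained; imagine this interface and its package protected
// implementation living together in one package.
interface OrderComponent {
    void place(String order);
    int pendingCount();
}

class InMemoryOrderComponent implements OrderComponent {
    private final List<String> orders = new ArrayList<>();
    public void place(String order) { orders.add(order); }
    public int pendingCount() { return orders.size(); }
}

// The test only touches the OrderComponent interface. If the real
// implementation talked to a database, the very same test would go
// through the API to the database and back again.
class OrderComponentTest {
    static void placingAnOrderIncreasesThePendingCount() {
        OrderComponent component = new InMemoryOrderComponent();
        component.place("1 x coffee");
        if (component.pendingCount() != 1) {
            throw new AssertionError("expected one pending order");
        }
    }

    public static void main(String[] args) {
        placingAnOrderIncreasesThePendingCount();
        System.out.println("component test passed");
    }
}
```

Because the test never names the implementation class, it survives refactorings inside the package; only changes to the component's API break it.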

Rethinking the testing pyramid?

A microservice architecture will likely push you down this route too, with a balanced mix of low-level class and higher-level service tests. Of course there is no "typical" shape for the testing pyramid; the type of system you're building will determine what it looks like. There are many options for building testable software, but neither unit testing nor TDD is dead.

In summary, I'm looking for ways in which we can structure our code for modularity at the macro level, to avoid the big ball of mud and to shrink the model-code gap. I also want to be able to automatically draw some useful architecture diagrams based upon the code. We shouldn't blindly be making everything public and writing automated tests at the class level. There are a number of different approaches that we can take for all of this, and the modularity you choose has implications for the number and types of tests that you write. As I said at the start: think about modularity, think about how you structure your code, think about the options you have for testing your code, and stop making everything public. Designing software requires conscious effort. Let's not stop thinking.

About the author

Simon is an independent consultant specializing in software architecture, and the author of Software Architecture for Developers (a developer-friendly guide to software architecture, technical leadership and the balance with agility). He’s also the creator of the C4 software architecture model and the founder of Structurizr, which is a collection of open source and commercial tooling to help software teams visualise, document and explore their software architecture.

You can find Simon on Twitter at @simonbrown ... see his website for information about his speaking schedule, videos from past conferences and software architecture training.

Re: Modularity and testability

I can empathize with your general message. However, when I look at the details, I doubt it will change much. Take this for example:

"It basically says that the abstractions we consider at the architecture level (components, services, modules, layers, etc) are often not explicitly reflected in the code."

Sure, where is the component, the layer etc. in code? How to systematically translate conceptual structures into code artifacts?

But then... What is a component, a service, a module, a layer anyway? Where's the definition of these "abstractions"? How do they relate to each other? Does a component contain modules? Or is it the other way around? What is a component or service or module relevant for? Design time or runtime? What distinguishes a module from a service from a component? What's the category these terms belong to? Or are some of them categories and others instances? When to choose which "abstraction"? When to break code up into two components or modules, and when not?

Those are all questions which mostly go unanswered. And my guess is that's also the reason why there are no programming language equivalents for those "abstractions", nor any clear translation rules.

Generation after generation of developers hears these terms being used by older developers. They get some kind of feeling for what the terms could mean - but they never actually exchange their views. It stays nebulous at best.

So nothing will change as long as "modularization" is not put on a more systematic foundation. "Modularize!" has been the call for the past 40 years - and where are we with that? So something must be missing. And that's not consciousness of the importance of "modularization". It's... well, I can't help but say it: what's missing is a clue as to how to go about it. A clear and simple method leading from requirements not only to code delivering the desired functionality, but also to evolvability and quality.

Definitions of these ubiquitous but fuzzy terms seem to me a prerequisite for that.


Re: Modularity and testability

Agreed ... the definitions of "component", "module", etc aren't widely understood. I tend to use them as synonyms, but others apply very different semantics to each term. Software Architecture in Practice (Clements, Bass, Kazman) is one of the few resources that provides definitions for these terms, and they throw "component instance" into the mix too. And this is in chapter 1, if I recall correctly. The theory is nice, but applying it to real-world programming languages is another thing entirely. That, plus few developers seem to have read the book. How do we fix all of this? :-)

Re: Modularity and testability

Thanks for the pointer to "Software Architecture in Practice". I checked out the first chapter: you're right, they provide some pretty clear definitions of Module and Component. And I also like their "Architectural Structures".


Even so, their approach stands on its head, so to speak: it starts with structural elements, not with requirements. My experience is that if you put an analysis of requirement categories first (even though that sounds theoretical to most devs ;-) it's much easier to get devs (and managers) to jump onto the "design bandwagon". Because suddenly they can feel why it's so important to look beyond technological challenges like scalability or deployment. Without tying evolvability to money, and without making crystal clear how different software is from hardware (in the most general sense), nobody will really feel an urge to invest in design.

How to fix all of this? I'd say we need at least to be equally specific. We should not perpetuate the vagueness of these terms. So if you have clear definitions for "component", "module", "service" and maybe some categories etc., don't tire of repeating them over and over again. And maybe map them to, or contrast them with, the definitions of others.

And then simplification is needed. In the end "architecture" is a huge topic. No, not "architecture", because it too is just a means to an end, which is "health". Yes, we're talking about "software health", which to me means "capability to perform as required - now and in the future". The most important part of this definition is "and in the future".

Most managers/devs focus on the "here and now" of "capability to perform as required". They bang out behavior: functionality plus quality (e.g. performance, security). They are addicted, always seeking the next kick. It's like working in an ER: someone gets rolled in, you work on the person for an hour or three, another life saved. Great!

Fixing a bug or quickly implementing some feature is so much more rewarding in several ways than doing design and thinking about evolvability. Nobody will pat you on the back for getting the evolvability right.

Plus, how do you do that thing with the evolvability anyway? How to do design "correctly" or at least straightforwardly? How to move systematically from requirements to classes, components, modules, services, layers and the like?

That's nowhere taught. But we are in terrible need of a simple method for everyday work, for average programmers.

We need to work on that. We need to provide guidance. It's a blank spot on the map of software development. OOP has not delivered on that, nor has UML. Agility is oblivious to this. SOLID is not of much help. Layered architecture, Onion Architecture, Hexagonal or Clean Architecture: that's all nice and well - but still too simple. They leave too many dots unconnected - and also don't solve fundamental problems. (Technology providers then try to connect those dots, which is bad: technology is a tool, not a value, a principle, or an end.)

Maybe we'll find some time at Software Architect 2014 to talk about this.

Re: Modularity and testability

+1 to the point about not making everything public. In Java the language has mechanisms to enforce this. In languages like Python, though, it has to be enforced through convention and discipline. I suppose the discipline piece is the most important thing anyway.
