In my The Frustrated Architect presentation at GOTO Aarhus in October*, I talked about how there are a number of "classic" software design techniques from the pre-agile era that are being used less and less. For example, things like UML, class-responsibility-collaboration cards and component-based design. This is a shame because some of these techniques can complement an agile way of working and would perhaps prevent some wheels from being reinvented. If people don't know about these techniques though, how will they adopt them? I'll come back to this shortly but, first, I was intrigued by this tweet from Uncle Bob a few weeks back.
I don't necessarily disagree with this statement, although I like to see a software architecture grounded in reality, and that includes technology choices. Another tweet from Uncle Bob...
Again ... maybe, maybe not. Surely if there are some key technology choices that need to be made, then they should be made, right? Finally, another tweet...
Hmmm, if I don't or can't defer decisions, does this mean that I have a bad architecture? Shouldn't deferral be a conscious decision rather than a rule? All of this and the discussion that followed on Twitter intrigued me enough to stump up the cash for Clean Code Episode VII - Architecture, Use Cases, and High Level Design to see what Uncle Bob's perspective on architecture is.
Now that I've watched it, what do I think? Well, I'm really pleased to see coverage of a couple of things. The first is describing functionality through delivery-mechanism-independent use cases, where there is no discussion of web pages, screens, buttons, technology, etc. And the second is the follow-up technique where you decompose a use case into a number of different classes, each of which has a distinct responsibility. These are entities (e.g. business objects), controllers (also known as interactors, which represent the actual flow of control described in the use cases) and boundaries (which represent an interaction with an actor through the "delivery mechanism"). Together, these techniques allow you to describe and implement a use case in a way that is completely independent of the way that the use case will be delivered. In effect, you can bolt on a number of different delivery mechanisms (e.g. a web or console app) without changing the actual core of "the application", which is ultimately the functionality described by the use cases. As I said at the start of this post, these are the sort of techniques that many people don't know about, so I'm really pleased to see them being communicated here.
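To make that a little more concrete, here's a minimal sketch in Java of the shape this takes, using a hypothetical "create invoice" use case. The class names and the flow are mine and purely illustrative (they're not taken from the video), and a real interactor would also talk to some kind of gateway for persistence, but it shows the separation.

```java
// Entity: a business object with no knowledge of any delivery mechanism.
class Invoice {
    private final String customerId;
    private final long amountInCents;

    Invoice(String customerId, long amountInCents) {
        this.customerId = customerId;
        this.amountInCents = amountInCents;
    }

    String customerId() { return customerId; }
    long amountInCents() { return amountInCents; }
}

// Boundary: how the use case interacts with an actor, whatever the
// delivery mechanism (web page, console app, test harness) turns out to be.
interface CreateInvoiceBoundary {
    void invoiceCreated(String invoiceNumber, long amountInCents);
    void invoiceRejected(String reason);
}

// Controller (interactor): the flow of control described by the use case,
// expressed with no reference to HTTP, screens or databases.
class CreateInvoiceController {
    private final CreateInvoiceBoundary boundary;

    CreateInvoiceController(CreateInvoiceBoundary boundary) {
        this.boundary = boundary;
    }

    void createInvoice(String customerId, long amountInCents) {
        if (amountInCents <= 0) {
            boundary.invoiceRejected("Amount must be greater than zero");
            return;
        }
        Invoice invoice = new Invoice(customerId, amountInCents);
        // A fuller example would hand the invoice to a gateway for persistence;
        // here the result is simply reported back through the boundary.
        boundary.invoiceCreated("INV-0001", invoice.amountInCents());
    }
}
```

A web controller, a console front-end or a test harness could each implement the boundary, and the use case neither knows nor cares which one is driving it.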
Coming back to Uncle Bob's tweets, I can now see his perspective. Adopting this approach does allow you to defer technology decisions and, from the perspective of the use cases, this technology stuff really is just an "annoying detail".
I agree that the boundary-controller-entity technique is a great way to design software because the result is a really nice separation of concerns, which ultimately leads to something that can be easily unit tested and extended in the future. This is all about partitioning and isolation. OK, so I'm agreeing with Uncle Bob then? Hmm, not quite.
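Before I get to why, a quick illustration of that testability point. This sketch drives the interactor from the example above through a fake boundary; again the names are mine and purely illustrative, and a real project would use a proper test framework such as JUnit rather than a main method.

```java
// Exercises CreateInvoiceController (from the earlier sketch) through a fake
// boundary, with no web server, database or framework involved.
public class CreateInvoiceControllerTest {

    // Fake boundary that simply records what the controller reports back.
    static class RecordingBoundary implements CreateInvoiceBoundary {
        String lastInvoiceNumber;
        String lastRejectionReason;

        public void invoiceCreated(String invoiceNumber, long amountInCents) {
            lastInvoiceNumber = invoiceNumber;
        }

        public void invoiceRejected(String reason) {
            lastRejectionReason = reason;
        }
    }

    public static void main(String[] args) {
        RecordingBoundary boundary = new RecordingBoundary();
        CreateInvoiceController controller = new CreateInvoiceController(boundary);

        // Happy path: a positive amount should result in an invoice.
        controller.createInvoice("customer-42", 10_000);
        if (boundary.lastInvoiceNumber == null) {
            throw new AssertionError("expected an invoice to be created");
        }

        // Sad path: a non-positive amount should be rejected.
        controller.createInvoice("customer-42", -1);
        if (boundary.lastRejectionReason == null) {
            throw new AssertionError("expected the negative amount to be rejected");
        }

        System.out.println("Use case behaves as described, no web server required");
    }
}
```

No web server, no database, no framework ... just the use case.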
Throughout the video, Uncle Bob says the following (which I've paraphrased).
The architecture of an accounting app should scream accounting. A web and console version of the same accounting app should have identical architectures. The web delivery mechanism is a detail.
This is repeated a number of times throughout the video, and it's based upon all of the good stuff that I've talked about above. However, I find myself in strong disagreement with the message as a whole. And here's why ... because the word "architecture" is being used. At face value this might sound pedantic but let's consider for a moment what Uncle Bob is actually talking about by redrawing the above diagram. Let's imagine that you're building an accounting app that you want to deliver over the web. Security is important, so let's break it into multiple physical tiers. And we need to store all of the accounting data somewhere, so let's use a database. How does that annoying detail look now then...
That's right, the annoying detail is actually a large chunk of the system and, for me, architecture is about more than just what's contained within "the application". Structure is very important, but what about the tricky stuff: non-functional requirements, the actual delivery mechanism (technologies, frameworks, tools, APIs, etc), infrastructure services (e.g. logging, exception handling, configuration, etc), integration services (internal and external), satisfying any environmental constraints (e.g. operations and support), and so on? For me, this is what "architecture" is all about and *that's* "the whole enchilada".
*I'll be presenting The Frustrated Architect at Skills Matter in London on the 15th of November and you can sign up for free.
If you don't have the video but want to get a feel for Uncle Bob's approach to architecture, take a look at the following links...
Simon is an independent consultant specialising in software architecture, and the author of Software Architecture for Developers (a developer-friendly guide to software architecture, technical leadership and the balance with agility). He's also the creator of the C4 software architecture model and the founder of Structurizr, which is a collection of open source and commercial tooling to help software teams visualise, document and explore their software architecture.
You can find Simon on Twitter at @simonbrown ... see simonbrown.je for information about his speaking schedule, videos from past conferences and software architecture training.
I see things quite differently. Functional requirements change frequently. Non-functional requirements change infrequently. To be agile enough to respond to business changes, we can't let the architecture depend upon the domain. I lay out my argument in Keep functional and non-functional requirements separate.
"I define an architecture as that which solves the non-functional requirements, independent of the domain"
I wouldn't agree with that at all. Architecture is there to support the fulfilment of all requirements. Sure, if your functional requirements don't rely on any aspect of the tech then you might conclude that architecture and functional requirements are orthogonal, but that's just a particular type of project rather than a global rule. Of course, that tends to be the implied context in many of these discussions.
There are many environments where the delivery mechanism and architecture supporting that is very much a functional requirement, and where the architectural solution isn't swappable without functional effect.
For traditional DB-driven business systems, where you're given a business process to automate / assist users with, the underlying tech, architecture & delivery channel are not necessarily significant factors that anyone other than developers/ops sees, but that's far from the only kind of software that needs architecture.
Less pedantically, I distinctly remember a generation of applications that were thin "webifications" of desktop apps. These attempted to build a core application independent of the delivery mechanism, precisely as Bob suggests. All of them ended up hopelessly snarled in state management nightmares. They didn't scale, tended to lock up badly (due to database transactions), and were painful to use because they did too many page reloads.
At the root, I think the architecture of a system must balance the forces acting on it. Some of those forces emerge from the application domain, but many of them also emerge from the technology domain.
One of these delivery media (the web) responds best to command-oriented transactions that do not rely on keeping state in memory. The other (the desktop) works best when state is kept in memory for rich interaction. With so many fundamental forces in opposition, how could we expect the architecture to be the same?
Would one ever build web applications out of a bunch of .ocx'es in containers? Would one ever build a desktop application out of hyperlinked resources? Only as an exercise in perversity. This isn't because we have bad frameworks or leaky abstractions (though both exist). It's because the fundamental natures of desktop and web applications are different. Ignoring that entire aspect just because it doesn't come from the business domain is irresponsible.
I think the key irony underlying Bob's description of the accounting app is that he has, in fact, chosen an architecture. And he has done so quite early. The choice to structure the application in terms of controllers and entities is precisely an architectural choice.
I agree, and the choice to defer decisions (e.g. through the use of layers, adapters, etc) is also a significant decision. People often tell me that they use an ORM (e.g. Hibernate) in order to defer the database decision. Of course there's still a significant decision here ... but it's the ORM rather than the database. Significant decisions don't necessarily disappear; they just move elsewhere.
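To put some code around that, this is the usual shape of the deferral (a sketch with illustrative names rather than anything from a specific project): the use case depends on a repository interface, and a JPA/Hibernate-backed implementation sits behind it. The database product can be swapped, but the commitment to an ORM, with all of its mapping, session and transaction semantics, is a significant decision in its own right.

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

// A minimal entity for the sketch; the names are illustrative.
@Entity
class Account {
    @Id
    private String accountNumber;
    private long balanceInCents;
    // accessors omitted for brevity
}

// The "port": what the use case needs, with no mention of SQL, JPA or Hibernate.
interface AccountRepository {
    Account findByNumber(String accountNumber);
    void save(Account account);
}

// The adapter: a JPA/Hibernate-backed implementation. The database product has
// been deferred, but the decision to shape persistence around an ORM (entity
// mappings, session and transaction semantics, lazy loading) has not.
class JpaAccountRepository implements AccountRepository {
    private final EntityManager entityManager;

    JpaAccountRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    @Override
    public Account findByNumber(String accountNumber) {
        return entityManager.find(Account.class, accountNumber);
    }

    @Override
    public void save(Account account) {
        entityManager.persist(account);
    }
}
```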
Ivar Jacobson may be best known for introducing use cases, but his book "Object-Oriented Software Engineering: A Use Case Driven Approach" actually included a complete, end-to-end modeling methodology. Use cases are used to capture the requirements of a system, and that does, in fact, drive the rest of the process. But the next step is to do "robustness analysis", which involves modeling the system functionality that meets the use case requirements in terms of boundary, controller and entity objects (I believe "boundary" was initially "interface", but that later became confusing with other uses of the term).
What I think is key here is that the BCE pattern is used as part of analysis. It was not intended to provide an overall "architecture" for the system. Indeed, once one adds in consideration of those annoying implementation details, there is an immediate need to control complexity. This then leads to the internal division of a system into subsystems, which can be done in various ways, with various trade-offs. The whole rest of Jacobson's approach handles these sorts of considerations, allocating functionality down to subsystems and then, if necessary, recursively analyzing and designing those subsystems. After all, Jacobson's background was in the telecom industry in which the architecture of very large "systems of systems" is paramount!
Jacobson's own view on all this has, of course, evolved over the years (he is a big proponent of agile these days). But still, to me at least, an important lesson is that architecture has to do with mediating the stakeholder requirements for a system with the engineering realities of building the system. A good analysis technique like BCE can be a crucial link in doing this. But that is really just the beginning of the architect's work.
In any significant system, the devil is often in those annoying details!
That's actually where I first came across BCE ... during the analysis phase of a large RUP project. We had some coaching from Rational (as they were at the time) to help us take our use cases through to BCE style diagrams. It was then a "simple" transformation from controllers to J2EE session beans and from entities to J2EE entity beans. I seem to remember that Rational Rose even had a set of icons for the appropriate EJB stereotypes. Ah, good times. ;-)
But yes, the devil is indeed in those annoying details!
In the late 90's I worked on a modeling and transactional coding (just pre-EJB) framework that started with use case patterns tied to BCE analysis patterns on the front end. But there were then two more levels of architectural models before you had worked out all the implementation decisions! Not to mention the possibility of recursing into analysis of the subsystems of a large system.
In order to be able to flexibly handle changes in architectural and implementation decisions, you need an explicit record of what those decisions were. To me, "agile" really requires being able to quickly and efficiently move back and forth between all of these levels, from requirements to code.
On the other hand, in my experience, models can be a great way for "a small number of highly experienced developers" (very much the kind of team I like to have!) to communicate amongst themselves and with stakeholders. But models are just a means for communication and documentation toward the end of developing a successful system. Certainly, if the models you are doing aren't helping you do your job better and faster, then don't do them!
But that's the topic of a whole other conversation...
Even when a lot of effort is focused on producing good code, too many programming details have to be mixed in to allow the code to communicate overall system design and architecture well enough. Good models can do that -- as long as they do stay in sync with the code. You can do that through process discipline or tooling support -- or even by making your models executable so they become the code.
But if you can't keep your models in sync and they end up communicating the wrong information, then that's worse than not having them.
Having participated in projects where the model went right down to an implementation level, whether for communication or direct code generation, I find it pretty unworkable at any non-trivial scale. Try getting version control set up for a large, very detailed model where many people have to be involved, for instance. We used to partition our models heavily to try to deal with that, but it was always far slower and more cumbersome for detail work than the code was. Better to just represent the high-level architecture, principles and idioms in the model (and it doesn't even need to be in a modelling tool for that; in fact, something more free-form like a wiki was often more powerful) and let the implementors instantiate and specialise the patterns appropriately. Another reason a wiki was better than a formal model was that these implementors could feed back any specialisations or issues for peer review far more easily.