I attended a fantastic talk about big data visualisation at the YOW! 2014 conference in Sydney last month (slides), where Doug Talbott talked about how to understand and visualise large quantities of data. One of the things he mentioned was Shneiderman's mantra:
Overview first, zoom and filter, then details-on-demand
Leaving aside the thorny issue of how teams structure their software systems as code, one of the major problems I see teams having with software architecture is how to think about their systems. There are various ways to do this, including a number of view catalogs (e.g. logical view, design view, development view, etc) and I have my C4 model that focuses on the static structure of a software system. If you inherit an existing codebase and are asked to create a software architecture model though, where do you start? And how do people start understanding the model as quickly as possible so they can get on with their jobs?
Shneiderman's mantra fits really nicely with the C4 model because it's hierarchical.
My starting point for understanding any software system is to draw a system context diagram. This helps me to understand the scope of the system, who is using it and what the key system dependencies are. It's usually quick to draw and quick to understand.
Next I'll open up the system and draw a diagram showing the containers (web applications, mobile apps, standalone applications, databases, file systems, message buses, etc) that make up the system. This shows the overall shape of the software system, how responsibilities have been distributed and the key technology choices that have been made.
As developers, we often need more detail, so I'll then zoom into each (interesting) container in turn and show the "components" inside it. This is where I show how each application has been decomposed into components, services, modules, layers, etc, along with a brief note about key responsibilities and technology choices. If you're hand-drawing the diagrams, this part can get a little tedious, which is why I'm focussing on creating a software architecture model as code, and automating as much of this as possible.
Optionally, I might progress deeper into the hierarchy to show the classes* that make up a particular component, service, module, layer, etc. Ultimately though, this detail resides in the code and, as software developers, we can get that on demand.
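One reason the four levels above translate so naturally into a model-as-code is that they form a simple containment hierarchy: a system contains containers, which contain components. As a rough illustration only (the class and method names here are hypothetical, not any real tooling API), a model might be built and traversed like this:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal, hypothetical model of the C4 hierarchy:
// software system -> containers -> components.
class Element {
    final String name;
    final String description;
    final List<Element> children = new ArrayList<>();

    Element(String name, String description) {
        this.name = name;
        this.description = description;
    }

    // Add a child element one level down the hierarchy.
    Element add(String name, String description) {
        Element child = new Element(name, description);
        children.add(child);
        return child;
    }

    // "Overview first, zoom and filter": each level of indentation
    // is one level deeper into the model.
    void print(int depth) {
        System.out.println("  ".repeat(depth) + name + " - " + description);
        for (Element child : children) {
            child.print(depth + 1);
        }
    }
}

public class C4Sketch {
    public static void main(String[] args) {
        Element system = new Element("Internet Banking System", "Lets customers view accounts");
        Element webApp = system.add("Web Application", "Spring MVC application");
        webApp.add("Accounts Controller", "Handles account requests");
        system.add("Database", "Stores customer data");
        system.print(0);
    }
}
```

The example system and its parts are invented for illustration; the point is only that the model is a tree, so each diagram type is just a different depth-limited view of the same structure.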
Next time you're asked to create an architecture model, understand an existing system, present a system overview, do some software archaeology, etc, my advice is to keep Shneiderman's mantra in mind. Start at the top and work down, creating a story that gets deeper into the detail as it progresses. The C4 model is a great way to do this and, if you'd like an introduction to it (with example diagrams), you can take a look at Simple Sketches for Diagramming Your Software Architecture on the new Voxxed website.
* this assumes an OO language like Java or C#, for example
So, 2015 ... happy new year! 2014 was a busy year with workshops, conferences and consulting gigs in countries ranging from Iceland to Australia. I'd like to say a huge thank you to everybody who made 2014 so much fun.
One of the things that I spent a good chunk of time on during 2014 was the conflict between software architecture and code. I've written about this before, but you will have seen this in action if the code for your software system doesn't reflect the architecture diagrams you have on the wall. If you've not seen it, my closing keynote from the ABB DevDay conference in Kraków, Poland last September provides a good summary of this.
What I'm really interested in is how we can solve this problem. And that's really where my focus is going to be this year, by taking my C4 software architecture model and representing it as code. I already have some experimental code and tooling that you can find at structurizr.com, but I'm going to be enhancing and expanding this over the coming weeks and months. I want to get people thinking about how to appropriately structure their codebase, understanding that there are different strategies for modularity and adopting what George Fairbanks calls an architecturally-evident coding style. I also want to provide tooling that helps people create software architecture models and keep them up to date, ideally based upon the real code and with as much automation as possible. To give you an example, here's a post about diagramming Spring MVC webapps.
I'll be posting updates on the blog, but if you want to hear me talk about this, I'll be at the following conferences over the next few months.
As a final note, my Software Architecture for Developers ebook is only $10 until the end of this week.
All the best for 2015.
One of the core concepts in the Software Architecture for Developers course is that the Quality Attributes (non-functional requirements) need to be understood in order to provide foundations for a system's architecture. It's no good building a system that fulfills its users' functional requirements if those functions are delivered in a way that violates key quality attributes. Consider the embedded software in a pacemaker. It may correctly analyse the rhythm of the patient's heart and conclude that a shock is required, but if this is performed at the wrong time (possibly due to jitter in the response) then it may kill the patient.
Discovering that critical quality attributes are not being met can require a complete system redesign, e.g. modifying an asynchronous system to be synchronous. Therefore the early identification of key Quality Attributes is important to drive your design and your selection of tools and technologies.
However, I've often had difficulties getting course attendees to identify specific attributes, as opposed to generic ones, for a case study. For example, most people will identify performance as important but struggle to go beyond this to consider trade-offs between, say, throughput and jitter.
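To make that trade-off concrete, here's a small sketch (the numbers are illustrative only) showing how two systems can have identical average response times, and therefore similar throughput, yet wildly different jitter, measured here as the standard deviation of response times:

```java
public class JitterSketch {
    // Mean of a set of response times (in milliseconds).
    static double mean(double[] times) {
        double sum = 0;
        for (double t : times) sum += t;
        return sum / times.length;
    }

    // A simple jitter measure: standard deviation of response times
    // around the mean.
    static double jitter(double[] times) {
        double m = mean(times);
        double sumSq = 0;
        for (double t : times) sumSq += (t - m) * (t - m);
        return Math.sqrt(sumSq / times.length);
    }

    public static void main(String[] args) {
        // Both systems average 100 ms per request...
        double[] steady = {98, 100, 102, 100, 100};
        double[] bursty = {10, 190, 10, 190, 100};

        // ...but their jitter differs enormously, which matters for a
        // deadline-driven system such as the pacemaker example above.
        System.out.printf("steady: mean=%.1f jitter=%.1f%n", mean(steady), jitter(steady));
        System.out.printf("bursty: mean=%.1f jitter=%.1f%n", mean(bursty), jitter(bursty));
    }
}
```

"Fast enough on average" says nothing about the bursty case, which is exactly the kind of specific attribute the workshop tries to surface.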
Therefore, in the last couple of courses, I have expanded the identification of Quality Attributes to include a very brief (and lightweight) Quality Attribute Workshop for our case study.
The Software Engineering Institute has a description of how to perform a Quality Attribute Workshop which includes a full process and template set. While excellent (and a core part of their ATAM architecture evaluation process), this is too involved for a short training course. We therefore performed just the 'Identification of Architectural Drivers' and review steps.
Importantly the SEI also provides a very useful tool for the identification of Quality Attributes - a taxonomy. This is not just a list of attributes with a detailed description, it actually breaks down attributes from the generic to the specific. Take, for example, the following diagram for performance:
(Performance Taxonomy Extracted from Barbacci, Mario; Klein, Mark; Longstaff, Thomas; & Weinstock, Charles. Quality Attributes (CMU/SEI-95-TR-021 ). Software Engineering Institute, Carnegie Mellon University, 1995.)
The Quality Attributes are broken down under the 'Concerns' branch. For example, in the case study we used, the 'Response Window' is an important metric which needs analysis.
The 'Factors' branch lists properties of the system that can impact the concerns. In our case study the 'Arrival Pattern' and 'Execution Time' are both important factors that need to be considered.
Lastly, the 'Methods' branch lists tools and theories that can be used to analyse the concerns.
This diagram is useful for identification as it encourages the reader to consider all the aspects of the attribute in question and the measurable specifics for it. Without this taxonomy it is common to hear comments such as "it has to run quick enough" but with the taxonomy the analysis becomes much more detailed and useful.
However there is a danger, particularly with using a general, external taxonomy. My observation is that, once provided with a taxonomy, participants tend to stick very closely to it and forget about the Quality Attributes NOT listed on it. For example, the SEI taxonomy does not include Usability attributes or anything covering Internationalisation/Localisation. In response to this I'd suggest creating your own domain-specific taxonomy. For example, if you work on retail websites you'll want more focus on usability and less on safety criticality.
I have found lightweight Quality Attribute Workshops to be a very effective way of identifying Quality Attributes in a short space of time, particularly if you use a taxonomy to focus the participants. However you must be careful not to become blinkered by what it lists. Therefore I'd suggest you create your own taxonomy, specific to your domain.
I'm just back from the YOW! conference tour in Australia (which was amazing!) and I presented this as the closing slide for my Agility and the essence of software architecture talk, which was about how to create agile software systems in an agile way.
You will have probably noticed that software architecture sketches/diagrams form a central part of my lightweight approach to software architecture, and I thought this slide was a nice way to summarise the various things that diagrams and the C4 model enable, plus how this helps to do just enough up front design. The slides are available to view online/download and hopefully one of the videos will be available to watch after the holiday season.
There is currently a strong trend for microservice based architectures and frequent discussions comparing them to monoliths. There is much advice about breaking-up monoliths into microservices and also some amusing fights between proponents of the two paradigms - see the great Microservices vs Monolithic Melee. The term 'Monolith' is increasingly being used as a generic insult in the same way that 'Legacy' is!
However, I believe that there is a great deal of misunderstanding about exactly what a 'Monolith' is and those discussing it are often talking about completely different things.
A monolith can be considered an architectural style or a software development pattern (or anti-pattern if you view it negatively). Styles and patterns usually fit into different Viewtypes (a viewtype is a set, or category, of views that can be easily reconciled with each other [Clements et al., 2010]) and some basic viewtypes we can discuss are:

- Module viewtype - how the code is structured, compiled and packaged.
- Allocation viewtype - how the software is shipped and deployed onto nodes.
- Runtime viewtype - how the software executes as processes at runtime.

A monolith could refer to any of these basic viewtypes.
If you have a module monolith then all of the code for a system is in a single codebase that is compiled together and produces a single artifact. The code may still be well structured (classes and packages that are coherent and decoupled at a source level rather than a big-ball-of-mud) but it is not split into separate modules for compilation. Conversely a non-monolithic module design may have code split into multiple modules or libraries that can be compiled separately, stored in repositories and referenced when required. There are advantages and disadvantages to both but this tells you very little about how the code is used - it is primarily done for development management.
For an allocation monolith, all of the code is shipped/deployed at the same time. In other words once the compiled code is 'ready for release' then a single version is shipped to all nodes. All running components have the same version of the software running at any point in time. This is independent of whether the module structure is a monolith. You may have compiled the entire codebase at once before deployment OR you may have created a set of deployment artifacts from multiple sources and versions. Either way this version of the system is deployed everywhere at once (often by stopping the entire system, rolling out the software and then restarting).
A non-monolithic allocation would involve deploying different versions to individual nodes at different times. This is again independent of the module structure as different versions of a module monolith could be deployed individually.
A runtime monolith will have a single application or process performing the work for the system (although the system may have multiple, external dependencies). Many systems have traditionally been written like this (especially line-of-business systems such as Payroll, Accounts Payable, CMS etc).
Whether the runtime is a monolith is independent of whether the system code is a module monolith or not. A runtime monolith often implies an allocation monolith if there is only one main node/component to be deployed (although this is not the case if a new version of software is rolled out across regions, with separate users, over a period of time).
Note that my examples above are slightly forced for the viewtypes and the distinctions won't be as hard-and-fast in the real world.
Be very careful when arguing about 'Microservices vs Monoliths'. A direct comparison is only possible when discussing the Runtime viewtype and its properties. You should also not assume that moving away from a Module or Allocation monolith will magically enable a Microservice architecture (although it will probably help). If you are moving to a Microservice architecture then I'd advise you to consider all these viewtypes and align your boundaries across them i.e. don't just code, build and distribute a monolith that exposes subsets of itself on different nodes.
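To underline that the three viewtypes are independent axes, any combination of them is possible for a given system. A rough sketch (the example systems and their classifications are illustrative, not prescriptive):

```java
public class MonolithSketch {
    // The three independent monolith axes discussed above.
    record Classification(boolean moduleMonolith,
                          boolean allocationMonolith,
                          boolean runtimeMonolith) {
        String describe() {
            return (moduleMonolith ? "single codebase" : "multiple modules") + ", "
                 + (allocationMonolith ? "deployed all-at-once" : "deployed independently") + ", "
                 + (runtimeMonolith ? "single process" : "multiple processes");
        }
    }

    public static void main(String[] args) {
        // A classic line-of-business system: monolithic on every axis.
        Classification payroll = new Classification(true, true, true);

        // The trap described above: one codebase, built and shipped as a
        // whole, but exposing subsets of itself on different nodes -- a
        // module and allocation monolith that merely looks like microservices.
        Classification fauxMicroservices = new Classification(true, true, false);

        System.out.println("payroll: " + payroll.describe());
        System.out.println("faux microservices: " + fauxMicroservices.describe());
    }
}
```

Asking "which axes is this system monolithic on?" is usually a more productive question than arguing about the label itself.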
For my final trip of the year, I'm heading to Australia at the end of this month for the YOW! 2014 series of conferences. I'll be presenting Agility and the essence of software architecture in Melbourne, Brisbane and Sydney. Plus I'll be running my Simple sketches for diagramming your software architecture workshop in Melbourne and Sydney. I can't wait; see you there!
If you’re working in an agile software development team at the moment, take a look around at your environment. Whether it’s physical or virtual, there’s likely to be a story wall or Kanban board visualising the work yet to be started, in progress and done. Visualising your software development process is a fantastic way to introduce transparency because anybody can see, at a glance, a high-level snapshot of the current progress.
As an industry, we’ve become adept at visualising our software development process over the past few years – however, it seems we’ve forgotten how to visualise the actual software that we’re building. I’m not just referring to post-project documentation. This also includes communication during the software development process. Agile approaches talk about moving fast, and this requires good communication, but it’s surprising that many teams struggle to effectively communicate the design of their software.
A lightweight approach to software architecture is pivotal to successfully delivering software, and it can complement agile approaches rather than compete against them. After all, a good architecture enables agility and this doesn't happen by magic. "Software Architecture for Developers" is a practical and pragmatic guide to lightweight software architecture. You'll learn:
- The essence of software architecture.
- Why the software architecture role should include coding, coaching and collaboration.
- The things that you *really* need to think about before coding.
- How to visualise your software architecture using simple sketches.
- A lightweight approach to documenting your software.
- Why there is *no* conflict between agile and architecture.
- What "just enough" up front design means.
- How to identify risks with risk-storming.
I'm excited to be working with Parleys on this and I think they have an amazing platform for delivering online training. If you're thinking about creating an online course, I recommend taking a look at Parleys. The tooling behind the scenes used to put the course together is incredible. Many thanks to Carlo Waelens and the Parleys team for everything over the past few months - I hope this is the start of something big for you.
I know there's demand for a hard-copy of the regular version, so I'll be doing this early next year, probably as a print-on-demand book from somewhere like Lulu, CreateSpace, etc.