One of my talks at the recent DevWeek conference was about the pitfalls software architects face, and I covered some of the problems associated with technology selection. Probably *the* biggest problem is vendor marketing and hype, with many project teams simply taking this at face value. Sometimes a piece of technology will do exactly what it says on the tin, but sometimes it won't. There are truly no silver bullets, and every technology, large or small, has trade-offs. You've probably seen this yourself at some point: vendors (open source and commercial) promising features they haven't yet implemented, through to bold claims about performance or scalability. Depending on your project context, these promises can make or break your project.
One of the analogies that I made during the session was about the fuel consumption figures quoted by car manufacturers in their glossy brochures. Let's imagine that you need to travel from one side of the country to the other, so you work out the mileage and then buy or rent a car based upon the fuel consumption figures quoted in a brochure. The quoted figures are usually based on optimum conditions, but real world figures will vary according to the way you drive, the ambient temperature, the gradient, the road surface and so on. Depending on all of this, you may or may not achieve your goal.
When we undertake a technology selection exercise, we'll typically evaluate candidates against a set of criteria and choose the one that we think best suits our needs before plugging it in to our projects. Adopting a technology without testing it first is like setting off across the country on the strength of the brochure figures alone - you're relying on somebody else's claims, and the journey might not go as expected. Literally, your mileage may vary!
Of course, the key difference is that you get a fuel gauge in a car that provides you with constant feedback on how much fuel remains in the tank. In addition, newer cars have onboard computers that can provide you with real-time consumption figures and estimate the mileage remaining. All of this information provides a way to monitor what is happening so that you can adjust (or fill up!) as necessary. Laptop batteries are the same. The manufacturers quote maximum battery life figures, and while you might not get that in real world usage, you do get to see how much battery life remains.
With this in mind, it's worth asking why we don't usually add fuel gauges to our own software systems. These systems are usually composed of many complex technologies, each of which makes its own claims and has its own trade-offs. Yet we often deploy and run our systems as a black box. Often this will work, but sometimes it won't. And worse still, without a fuel gauge you have no idea when your system will slow to a crawl or stop working completely.
Adding a monitoring capability is fairly easy to do and can give you important insight into the health of your software. For example, it might allow you to monitor how many database connections are being used, or how many messages are waiting to be processed, or how many worker threads are busy servicing user requests. Here are some thoughts on how to cater for monitoring in your architecture, and they're particularly relevant if you're building Java applications.
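As a minimal sketch of what such a fuel gauge might look like in Java, the example below exposes the number of busy worker threads as a JMX attribute using the standard `javax.management` API, so it can be read at runtime from a tool like JConsole or VisualVM. The class and attribute names (`WorkerPoolGauge`, `BusyWorkers`) are hypothetical, chosen purely for illustration.

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicInteger;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the interface name is the class name + "MBean".
interface WorkerPoolGaugeMBean {
    int getBusyWorkers();
}

// A simple "fuel gauge" for a worker pool: the application calls
// workerStarted()/workerFinished(), and monitoring tools read the count.
class WorkerPoolGauge implements WorkerPoolGaugeMBean {
    private final AtomicInteger busyWorkers = new AtomicInteger();

    public void workerStarted()  { busyWorkers.incrementAndGet(); }
    public void workerFinished() { busyWorkers.decrementAndGet(); }

    @Override
    public int getBusyWorkers() { return busyWorkers.get(); }
}

public class GaugeDemo {
    public static void main(String[] args) throws Exception {
        WorkerPoolGauge gauge = new WorkerPoolGauge();

        // Register the gauge with the platform MBean server so any
        // JMX client can watch it while the application runs.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=WorkerPoolGauge");
        server.registerMBean(gauge, name);

        // Simulate two workers picking up requests.
        gauge.workerStarted();
        gauge.workerStarted();

        // Read the attribute back through JMX, as a monitoring tool would.
        int busy = (Integer) server.getAttribute(name, "BusyWorkers");
        System.out.println("Busy workers: " + busy);
    }
}
```

The same pattern works for the other examples above: a gauge for waiting messages or open database connections is just another MBean attribute backed by a counter that the application updates as it does its work.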
As with cars and laptops, there are benefits to be had by adding some simple feedback devices to our software systems. After all, wouldn't it be great if you could understand the health of your system and proactively deal with problems before they become major issues?
Simon is an independent consultant specializing in software architecture, and the author of Software Architecture for Developers (a developer-friendly guide to software architecture, technical leadership and the balance with agility). He’s also the creator of the C4 software architecture model and the founder of Structurizr, which is a collection of open source and commercial tooling to help software teams visualise, document and explore their software architecture.