Lightweight Quality Attribute Workshop

How do you quickly gather Quality Attributes from stakeholders?

One of the core concepts in the Software Architecture for Developers course is that the Quality Attributes (non-functional requirements) need to be understood in order to provide foundations for a system's architecture. It's no good building a system that fulfils its users' functional requirements if those functions are delivered incorrectly. Consider the embedded software in a pacemaker. It may correctly analyse the rhythm of the patient's heart and conclude that a shock is required, but if this is performed at the wrong time (possibly due to jitter in the response) then it may kill the patient.

Discovering that critical quality attributes are not being met can require a complete system redesign, e.g. modifying an asynchronous system to be synchronous. The early identification of key Quality Attributes is therefore important: it drives your design and the selection of tools and technologies.

However, I've often had difficulty getting course attendees to identify specific attributes, as opposed to generic ones, for a case study. For example, most people will identify performance as important but struggle to go beyond this to consider trade-offs between, say, throughput and jitter.

Therefore, in the last couple of courses, I have expanded the identification of Quality Attributes to include a very brief (and lightweight) Quality Attribute Workshop for our case study.

The Software Engineering Institute has a description of how to perform a Quality Attribute Workshop, which includes a full process and template set. While excellent (and a core part of their ATAM architecture evaluation process), this is too involved for a short training course, so we performed just the 'Identification of Architectural Drivers' and review steps.

Importantly, the SEI also provides a very useful tool for the identification of Quality Attributes - a taxonomy. This is not just a list of attributes with detailed descriptions; it actually breaks attributes down from the generic to the specific. Take, for example, the following diagram for performance:


[Figure: Performance Taxonomy. Extracted from Barbacci, Mario; Klein, Mark; Longstaff, Thomas; & Weinstock, Charles. Quality Attributes (CMU/SEI-95-TR-021). Software Engineering Institute, Carnegie Mellon University, 1995.]


The Quality Attributes are broken down under the 'Concerns' branch. For example, in the case study used, the 'Response Window' is an important metric that needs analysis.

The 'Factors' branch lists properties of the system that can impact the concerns. In our case study, the 'Arrival Pattern' and 'Execution Time' are both important factors that need to be considered.

Lastly, the 'Methods' branch lists tools and theories that can be used to analyse the concerns.

This diagram is useful for identification as it encourages the reader to consider all aspects of the attribute in question, along with the measurable specifics for it. Without this taxonomy it is common to hear comments such as "it has to run quick enough", but with the taxonomy the analysis becomes much more detailed and useful.
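As an aside, the Concerns/Factors/Methods shape of the taxonomy is simple enough to capture as a small data structure, which can be handy for building workshop checklists. The sketch below is my own illustration, not part of the SEI materials; the leaf names are drawn from the performance taxonomy discussed above, but the exact selection is an assumption and is far from exhaustive:

```python
# A minimal sketch of one quality attribute's taxonomy as a nested dict.
# Branch names (Concerns/Factors/Methods) follow the SEI structure; the
# leaves shown are illustrative examples only.
performance_taxonomy = {
    "Concerns": ["Latency", "Throughput", "Response Window", "Jitter"],
    "Factors": ["Arrival Pattern", "Execution Time", "Resource Allocation"],
    "Methods": ["Queueing Theory", "Rate Monotonic Analysis", "Simulation"],
}

def print_tree(taxonomy, attribute="Performance"):
    """Print the taxonomy as an indented tree, one leaf per line,
    so it can be used as a discussion checklist in a workshop."""
    print(attribute)
    for branch, leaves in taxonomy.items():
        print("  " + branch)
        for leaf in leaves:
            print("    " + leaf)

print_tree(performance_taxonomy)
```

Walking the tree like this forces every leaf to be raised at least once, which is precisely what stops the discussion stalling at "performance is important".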

However, there is a danger, particularly when using a general, external taxonomy. My observation is that, once provided with a taxonomy, participants tend to stick very closely to it and forget about the Quality Attributes NOT listed on it. For example, the SEI list does not include Usability attributes or anything covering Internationalisation/Localisation. In response to this I'd suggest creating your own domain-specific taxonomy. For example, if you work on retail websites you'll want more focus on usability and less on safety criticality.

Conclusion

I have found lightweight Quality Attribute Workshops to be a very effective way of identifying Quality Attributes in a short space of time, particularly if you use a taxonomy to focus the participants. However, you must be careful not to become blinkered by what it lists. Therefore I'd suggest you create your own taxonomy, specific to your domain.

About the author

Robert Annett

Robert works in financial services and has spent many years creating and maintaining trading systems. He knows far more about low latency data systems and garbage collection than is good for anyone. He likes to think of himself as a pragmatist who loves technology but uses what's appropriate rather than what's cool.

When not poring over data connections or tormenting interviewees with circular reference questions, Robert can be found locked in his shed with an impressive collection of woodworking tools.

E-mail : robert.annett at codingthearchitecture.com


You Should Check out the Mini-QAW

Great to see that you're advocating the use of taxonomies, and I agree whole-heartedly with the caveats about using a generic one. Generic taxonomies are great for getting started but really should be tailored over time to the types of systems you typically build. For example, my team built a taxonomy specific to search-based applications and a questionnaire that goes with it to aid in the QAW. A solid quality attributes taxonomy and questionnaire is a significant part of what allowed us to "trim the fat" from the traditional QAW when designing the Mini-QAW. For your reference, here are the slides and video from the SATURN 2014 talk describing the Mini-QAW. I'm sure your readers will find the additional details on facilitating such a workshop useful. The "System Properties Web" activity is a great concrete example of how to apply a taxonomy.
  • Facilitating the Mini-Quality Attributes Workshop (slides) -- http://resources.sei.cmu.edu/library/asset-view.cfm?assetID=89553
  • Video of the talk given at SATURN 2014 -- https://www.youtube.com/watch?v=vPGyPRFx0mk&list=PLSNlEg26NNpy1RjhlISNMRNO1gypYaXHo&index=18
What are your thoughts on trying to build a GitHub repo or something similar to help collect and curate quality attribute taxonomies tailored to specific domains or types of systems? I'd be happy to discuss any of these ideas further with anyone interested.

You Should Check out the Mini-QAW

Thanks! I hadn't seen your Saturn video before. I've just watched it and it definitely mirrors my own thoughts. I got participants to capture the QAs as a tree matching the taxonomy but I like your use of a web...

What I found most interesting was the number of (knowledgeable, technical) participants who weren't aware that certain quality attributes even existed. Using the taxonomy not only forced them to consider unfamiliar quality attributes, but also stopped them over-focusing on well-known ones, as they were aware of how many they needed to discuss in a short period of time! This activity *has* to be time-boxed.

I would be interested in a taxonomy repository but we'd need domain experts or permission to transpose from elsewhere.

