A Software Quality Model (Part II) – Translating customer language into metrics, scoring quality

In my previous post I explained the context of my thesis and the various software quality models that were evaluated.

For my thesis I have extended the Software Quality Model of Willmer. It is not an exact implementation of that model, but it is inspired by it, and the influence of the customer is incorporated into it. The goal of this model is to translate customer desires into metrics, in order to calculate the total quality of the system.

When customers talk about software quality, it is either very concrete (I want a red car!) or very abstract (it has to be reliable!). Customers tend to describe their ‘experiences of quality’ in sentences. This is the first step of the model: try to get a few (eight at most) of these sentences. They must be distinctive. (Don’t let four sentences all preach about security…)

Translating these sentences into concrete, measurable ‘things’ for developers is another story. But before doing that, ask your customer what the relative importance is of all the sentences you have just written down. Just imagine the situation where you have to go live, but there is one blocking issue left for each aspect (sentence). If you could tackle only one, which would you pick? The first sentence, the second, and so on?

Of course, your customer will tell you in his own domain language what the most important thing is. Try to map that (and confirm it) onto these sentences. Score them and try to get an ‘order of importance’. After you have done that, you have reached your first important goal: you know the relative importance of each quality sentence (aspect from now on).
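
To make this a bit more concrete, here is a minimal Python sketch of turning such scores into relative importance weights; the sentences and scores are made up for the example:

```python
# Hypothetical quality sentences (aspects) with the customer's scores (1-10).
customer_scores = {
    "It has to be reliable": 9,
    "It must respond quickly": 6,
    "It should be easy to change": 5,
}

total = sum(customer_scores.values())

# Normalise so that the relative importance of all aspects adds up to 100%.
relative_importance = {aspect: score / total for aspect, score in customer_scores.items()}

for aspect, weight in relative_importance.items():
    print(f"{aspect}: {weight:.0%}")
```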

So, what now? The next step is to map these aspects/sentences to Software Quality Attributes (also known as non-functional requirements). You either need a Software Engineer to do that, or even better, do it with your team. Before mapping, first make a selection of the Quality Attributes that are most relevant to you; for example, use Boehm’s quality tree or ISO 9126 as a reference. Within my thesis I have used 9 Quality Attributes, some of which are ‘sub-quality attributes’ of others. Examples of Quality Attributes: Understandability, Reliability, Security, Availability, Complexity, etc.

The result of this mapping is that each aspect gets several quality attributes. Not all quality attributes are applicable to every aspect. Try to figure out how applicable each quality attribute is to an aspect. Do this by asking your team members; for each member that selects a quality attribute, score it. This way you can calculate, relatively, how your team thinks the mapping should be. This is important, because the eventual result of your measurement (see below) should be the product of both the customer and the team that works on the product.
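
A minimal sketch of how such team votes could be turned into relative applicability (assuming one vote per team member per attribute, with invented numbers):

```python
# One vote per team member that selects a quality attribute for this aspect.
votes_per_attribute = {
    "Reliability": 4,
    "Security": 2,
    "Understandability": 2,
}

total_votes = sum(votes_per_attribute.values())

# Relative applicability of each attribute to the aspect (adds up to 100%).
applicability = {attr: votes / total_votes for attr, votes in votes_per_attribute.items()}

for attr, share in applicability.items():
    print(f"{attr}: {share:.0%} applicable")
```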

So, you have a few aspects, and each aspect has a few quality attributes. All that is left is to map metrics to the quality attributes. This mapping is fairly easy; there are quite a bunch of metrics out there, and each metric tries to measure (a piece of) a quality attribute. Some are easy: complexity (a quality attribute), for example, can be measured by (although it is not limited to) McCabe’s Cyclomatic Complexity metric.

So basically you end up with this:

Aspect(n..8) –> Quality Attribute(n) –> Metrics(n)

Where:

  • the total quality of the system is the combination of all aspects (the relative importance of all aspects adds up to 100%)
  • you should keep to at most eight aspects (believe me, more will only make it harder to distinguish them and make decisions)
  • you should attach quality attributes to each aspect, and determine their relative applicability to this aspect
  • you should attach metrics to quality attributes (a sketch of this structure follows below)
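
To make this structure a bit more tangible, here is a minimal sketch of how it could be represented in code; the names, percentages and metrics are placeholders, not values from a real measurement:

```python
# Aspect -> quality attributes (with applicability) -> metrics.
quality_model = {
    "It has to be reliable": {
        "importance": 0.30,              # relative importance of this aspect
        "attributes": {
            "Reliability": {
                "applicability": 0.60,   # how applicable this attribute is to the aspect
                "metrics": ["Mean Time Between Failures"],
            },
            "Complexity": {
                "applicability": 0.40,
                "metrics": ["Cyclomatic Complexity (McCabe)"],
            },
        },
    },
    # ... up to eight aspects in total
}
```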

So in the end, how do you score quality? Is it possible with this model? Yes, certainly it is.

Once you have found metrics and attached them to quality attributes, you should formulate ‘scoring rules’. This means you need to write down how you will interpret the results of a metric and translate them onto a scale of 1 to 10. A scoring rule could be:

“Lines of Code (LOC) may be 1000 per file at most. Every 100 lines more subtracts one point. (Giving 10 points at 1000 lines or fewer, and a 1 at 2000 lines.)”

This means a LOC of 1000 will score a 10, a LOC of 1500 scores a 5, and a LOC of 2000 or higher scores a 1 (the minimum).
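
As a sketch, this scoring rule could be written down like this (with the thresholds from the rule above hard-coded):

```python
def score_lines_of_code(loc: int) -> int:
    """Score a file on a scale of 1 to 10: 1000 lines or fewer scores a 10,
    every 100 extra lines subtracts one point, with a floor of 1."""
    if loc <= 1000:
        return 10
    return max(1, 10 - (loc - 1000) // 100)

print(score_lines_of_code(1000))  # 10
print(score_lines_of_code(1500))  # 5
print(score_lines_of_code(2000))  # 1
```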

Do this for all metrics, and eventually you will be able to calculate the total quality of the system.

In order to make this more concrete, here is an example of such a calculation:

Total quality score = the sum of the scores of all aspects

Aspect score = (the sum of the scores of all applicable attributes, each weighted by its applicability) * the relative importance of the aspect

Attribute score = the sum of the scores of all applicable metrics, each weighted by its relative weight

Example (for the sake of this example, the attributes have already been scored):

Aspect #1

Has a relative importance of 30% (relative to Aspect #2)

Attributes:

A -> for 40% applicable

B -> for 60% applicable

Aspect #2

Has a relative importance of 70% (relative to Aspect #1)

Attributes:

C -> for 70% applicable

B -> for 30% applicable

Scoring:

Attribute A scores 7 (out of 10)

Attribute B scores 5 (out of 10)

Attribute C scores 8 (out of 10)

Total quality calculation:

Aspect #1

A = 7 * 40% = 2,8

B = 5 * 60% = 3,0

Absolute score is 2,8 + 3,0 = 5,8

Aspect #2

C = 8 * 70% = 5,6

B = 5 * 30% = 1,5

Absolute score is 5,6 + 1,5 = 7,1

Total quality is:

Aspect #1 –> 5,8 * 30% (importance) = 1,74

Aspect #2 –> 7,1 * 70% (importance) = 4,97

Total quality score is 1,74 + 4,97 = 6,71

6,71 on a scale of 1 to 10 (1 being worst, 10 being best).
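
The same calculation can be reproduced as a small Python sketch, using the numbers from the example above:

```python
# Attribute scores (out of 10) from the example above.
attribute_scores = {"A": 7, "B": 5, "C": 8}

# Relative importance of each aspect and the applicability of its attributes.
aspects = {
    "Aspect #1": {"importance": 0.30, "attributes": {"A": 0.40, "B": 0.60}},
    "Aspect #2": {"importance": 0.70, "attributes": {"C": 0.70, "B": 0.30}},
}

total_quality = 0.0
for name, aspect in aspects.items():
    # Aspect score = sum of (attribute score * applicability).
    aspect_score = sum(attribute_scores[attr] * weight
                       for attr, weight in aspect["attributes"].items())
    print(f"{name}: {aspect_score:.2f}")        # 5.80 and 7.10
    total_quality += aspect_score * aspect["importance"]

print(f"Total quality: {total_quality:.2f}")    # 6.71
```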