
My experience with (un)certainty about estimates in relation to technical debt

Not too long ago, Martin Fowler pointed out a nice blog post by Jay Fields. In it, Jay Fields refers to a conversation he had about accidental complexity and essential complexity and how they affect your estimates. He found that not all developers take the accidental complexity into account and therefore give lower estimates.

I found this a very interesting thought. It got me thinking about how I estimate and how far off I am. I found that, especially with larger solutions, I am underestimating most of the time. Even with more complex work, and after adding some 'unforeseen complexity' percentage, I am still underestimating most of the time. However, I have had better experiences on other projects. Especially on the latest project I am working on, the estimates for fixes and rework are not nearly as far off. How is this possible?

I find myself labeling this phenomenon as "lack of overview". The definition of accidental complexity describes it as "…accidental complexity is caused by the approach chosen to solve the problem." I believe this 'approach chosen to solve the problem' is the design of the code. This is different from essential complexity, which I believe is much like cyclomatic complexity.

I made mistakes in my estimates, even when I knew the code well. Often it was due to a dependency that 'got in the way', or worse, the lack of dependencies: all functionality was in one class! Adding similar behaviour required me to duplicate code. I consider that a bad practice, so I had to extract code from the other class; I was untangling the code. Whenever I had to untangle that code (i.e. separate concerns), I had a hard time doing so, because untangling a tangled (tightly coupled) piece of code forced me to untangle other pieces of code as well. I had to stop somewhere. Like someone once said to me: the devil is in the details. (This is one of the reasons I encourage my co-developers to talk to interfaces, and not implementations.)
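
A minimal sketch of what I mean by talking to interfaces (the names and the Python form are made up for illustration, not taken from the project I describe):

```python
from abc import ABC, abstractmethod

class InvoiceRenderer(ABC):
    """The interface (abstraction) that callers talk to."""

    @abstractmethod
    def render(self, invoice: dict) -> bytes:
        ...

class PdfInvoiceRenderer(InvoiceRenderer):
    """One concrete implementation; others can be added without touching callers."""

    def render(self, invoice: dict) -> bytes:
        return f"PDF for invoice {invoice['id']}".encode()

class InvoiceService:
    """Depends only on the interface, so it stays untangled from any concrete renderer."""

    def __init__(self, renderer: InvoiceRenderer) -> None:
        self.renderer = renderer

    def send(self, invoice: dict) -> bytes:
        return self.renderer.render(invoice)

# A tightly coupled version would construct PdfInvoiceRenderer inside
# InvoiceService, so every new output format would force changes (and
# untangling) inside that class.
print(InvoiceService(PdfInvoiceRenderer()).send({"id": 42}))
```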

But why are estimates off anyway? Is it because of (a lack of) experience with the code? Even with code I had worked with for years, I still made bad estimates, and I could not find a way to improve them. The newer project was much easier for me to estimate, and I already knew why it was going better:

My mental model of the code matched the actual code better. Was it because I had worked on it recently and knew exactly how it worked in detail? No, not at all! The technical debt is much lower on this project. One of the principles that played a huge role was the Single Responsibility Principle (PDF). When I had to make a change, it was often in one place. When I had to add code, I could easily move code out of a class and separate responsibilities. The code was less tangled, less tightly coupled.

This phenomenon of untangling code, separating concerns and having a hard time maintaining code is clearly a sign of paying serious interest on technical debt. And I clearly see that as the result of an 'approach chosen to solve the problem'.

Therefore I believe technical debt is linked to essential and accidental complexity, and to more than that (what about readability?). Accidental complexity is something that is very hard to grasp. I think this 'uncertainty' needs to be made explicit and added to each initial estimate in order to get a 'more realistic' estimate.

I would recommend estimating while looking at the code itself, rather than using just your mental model of the code.
Finally, paying down the interest on technical debt should be prioritized in order to keep the system maintainable and to avoid an angry customer who gets ever fewer features that take ever more time to build.

A Software Quality Model (Part II) – Translating customer language into metrics, scoring quality

In my previous post I explained the context of my thesis and the various software quality models that were evaluated.

For my thesis I have extended the Software Quality Model of Willmer. It is not an exact implementation of that model, but it is inspired by it, and the influence of the customer is incorporated into it. The goal of this model is to translate customer desires into metrics, in order to calculate the total quality.

When customers talk about software quality, they are either very concrete (I want a red car!) or very abstract (it has to be reliable!). Customers tend to express their 'experience of quality' in sentences. This is the first step of the model: try to get a few (eight at most) of these sentences. They must be distinct. (Don't have four sentences preach about security…)

Translating these sentences into concrete, measurable 'things' for developers is another story. But before doing that, ask your customer what the relative importance is of all the sentences you have just written down. Imagine the situation where you have to go live, but there are blocking issues that need to be tackled, one for each aspect (sentence). If you could pick only one, which would you pick? Would you tackle the first sentence, the second, etc.?

Of course, your customer will tell you, in his own domain language, what the most important thing is. Try to map that onto these sentences (and confirm the mapping). Score them and try to get an 'order of importance'. Once you have done that, you have reached your first important goal: you know the relative importance of each quality sentence (aspect, from now on).

So, what now? The next step is to map these aspects/sentences to Software Quality Attributes (also known as non-functional requirements). You either need a Software Engineer to do that, or even better, do it with your team. Before mapping, first make a selection of the Quality Attributes that are most relevant to you, e.g. by using Boehm's quality tree or ISO 9126 as a reference. Within my thesis I have used 9 Quality Attributes; some Quality Attributes are 'sub-quality attributes' of others. Examples of Quality Attributes: Understandability, Reliability, Security, Availability, Complexity, etc.

The result of this mapping is that each aspect gets several quality attributes. Not all quality attributes are applicable to an aspect, so try to figure out to what degree each quality attribute applies to an aspect. Do this by asking your team members: for each member that selects a quality attribute, count a vote for it. This way you can calculate how applicable your team, relatively, thinks each attribute is. This is important, because the eventual result of your measurement (see below) should be the product of both the customer and the team that works on the product.
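
As a sketch of how such a team vote could be turned into relative applicability figures (this is just my own illustration of the idea, with made-up attributes and votes):

```python
from collections import Counter

# Each team member names the quality attributes they consider applicable
# to one particular aspect.
votes = [
    ["Reliability", "Security"],           # member 1
    ["Reliability", "Understandability"],  # member 2
    ["Reliability", "Security"],           # member 3
]

counts = Counter(attribute for member in votes for attribute in member)
total_votes = sum(counts.values())

# Relative applicability: each attribute's share of all votes for this aspect.
applicability = {attribute: count / total_votes for attribute, count in counts.items()}
print(applicability)  # Reliability 0.50, Security ~0.33, Understandability ~0.17
```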

So, you have a few aspects, and each aspect has a few quality attributes. All that is left is to map metrics to quality attributes. This mapping is fairly easy; there are quite a few metrics out there, and each metric tries to measure (a piece of) a quality attribute. Some are easy: complexity (the quality attribute), for example, can be measured by (although it is not limited to) McCabe's cyclomatic complexity metric.

So basically you end up with this:

Aspect (1..8) -> Quality Attribute (n) -> Metric (n)

Where:

  • the total quality of the system is the combination of all aspects (the relative weights of all aspects together make 100%)
  • you should keep to at most eight aspects (believe me, more will only make it harder to distinguish between them and to make decisions)
  • you should attach quality attributes to each aspect, and determine their relative applicability to that aspect
  • you should attach metrics to the quality attributes (a small code sketch of this structure follows below)
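
One way to capture this structure in code could look like the following (a sketch with made-up names and weights, not the implementation used in the thesis):

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    weight: float  # relative weight of this metric within its quality attribute

@dataclass
class QualityAttribute:
    name: str
    applicability: float  # relative applicability within the aspect
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Aspect:
    sentence: str      # the customer's own wording
    importance: float  # relative importance; the importances of all aspects sum to 1.0
    attributes: list[QualityAttribute] = field(default_factory=list)

# Example: one aspect with one attribute, measured by a single metric.
maintainability = Aspect(
    sentence="We must be able to change the system quickly",
    importance=0.3,
    attributes=[
        QualityAttribute("Complexity", applicability=0.4,
                         metrics=[Metric("Cyclomatic Complexity (McCabe)", weight=1.0)]),
    ],
)
```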

So in the end, how do you score quality? Is it possible with this model? Yes, certainly it is.

Once you have found metrics and attached them to quality attributes, you should formulate 'scoring rules'. This means you need to write down how you will interpret the results of a metric and translate them onto a scale of 1 to 10. A scoring rule could be:

“Lines of Code (LOC) may be at most 1000 for a file. Every 100 lines above that subtracts one point (giving 10 points at 1000 lines or fewer, and a 1 at 2000 lines or more).”

This means a LOC of 1000 scores a 10, a LOC of 1500 scores a 5, and a LOC of 2000 or higher scores a 1 (the scale bottoms out at 1).
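
Such a rule is easy to express as a small scoring function, for example (a sketch; clamping the result to the bottom of the 1 to 10 scale is my reading of the rule):

```python
def score_loc(lines_of_code: int) -> int:
    """10 points at 1000 LOC or fewer, one point less per extra 100 lines, never below 1."""
    penalty = max(0, lines_of_code - 1000) // 100
    return max(1, 10 - penalty)

assert score_loc(1000) == 10
assert score_loc(1500) == 5
assert score_loc(2000) == 1  # clamped to the bottom of the scale
```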

Do this for all metrics, and eventually you will be able to calculate the total quality of the system.

To make this more concrete, here is an example of such a calculation:

Total quality score = sum of the scores of all aspects

Aspect score = (sum of the weighted scores of all applicable attributes) * relative importance of the aspect

Attribute score = sum of the weighted scores of all applicable metrics

Example (for the sake of this example, the attributes have already been scored):

  • Aspect #1 is 30% important (relative to Aspect #2); its applicable attributes are A (40% applicable) and B (60% applicable)
  • Aspect #2 is 70% important (relative to Aspect #1); its applicable attributes are C (70% applicable) and B (30% applicable)

Scoring:

  • Attribute A scores 7 (out of 10)
  • Attribute B scores 5 (out of 10)
  • Attribute C scores 8 (out of 10)

Total quality calculation:

Aspect #1

  • A = 7 * 40% = 2.8
  • B = 5 * 60% = 3.0
  • Absolute score is 2.8 + 3.0 = 5.8

Aspect #2

  • C = 8 * 70% = 5.6
  • B = 5 * 30% = 1.5
  • Absolute score is 5.6 + 1.5 = 7.1

Total quality is:

  • Aspect #1 -> 5.8 * 30% (importance) = 1.74
  • Aspect #2 -> 7.1 * 70% (importance) = 4.97
  • Total quality score is 1.74 + 4.97 = 6.71

That is a 6.71 on a scale of 1 to 10 (1 being worst, 10 being best).
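
For completeness, the whole worked example can be reproduced in a few lines of code (again just a sketch of the model, using the numbers above):

```python
# Relative importance of each aspect (together 100%), and per aspect the
# applicable attributes with their relative applicability.
aspects = {
    "Aspect #1": {"importance": 0.3, "attributes": {"A": 0.4, "B": 0.6}},
    "Aspect #2": {"importance": 0.7, "attributes": {"C": 0.7, "B": 0.3}},
}

# Attribute scores on the 1-10 scale (already derived from their metrics).
attribute_scores = {"A": 7, "B": 5, "C": 8}

total_quality = 0.0
for name, aspect in aspects.items():
    # Aspect score: applicability-weighted sum of its attribute scores.
    aspect_score = sum(attribute_scores[attribute] * applicability
                       for attribute, applicability in aspect["attributes"].items())
    print(f"{name}: absolute score {aspect_score:.1f}")
    total_quality += aspect_score * aspect["importance"]

print(f"Total quality score: {total_quality:.2f}")  # 6.71
```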