Counting bugs and creating metrics out of the numbers is a widely (mis-)used practice in software development. I understand where Michael Bolton is coming from and agree to a large degree. Apart from the obvious problems, it also fosters bad behaviour, as people try to game the numbers and thereby reduce their value. Of course there’s a however looming…
I’ll raise my hand and profess myself guilty of counting bugs in the past and creating metrics and diagrams out of them. For a variety of reasons explained below, it looked helpful.
I made the mistake of thinking of the bug count as information, which it isn’t. It’s data. It becomes information once someone attaches meaning to it in context, rightly or wrongly. I’ll try to make the point that the counts themselves are meaningless and that the context is actually the important part.
Here’s one such example where I tried to read meaning into the data. In a non-agile development project (with variations for company and type of project) I expect a relatively steep rise in the bug count, a couple of waves as new builds come in, bugs are fixed and new ones are found, and a tapering off towards the end. Note: in agile projects not all bugs may actually get reported, since many are fixed on the spot, so the count becomes even more meaningless.
Looking at the graph, I’m wondering what I’m missing (yes, context). Is this the total number of bugs, or just the high-priority ones? The numbers go up and down in a certain way; is that expected? For example, was there a big new feature in the build around 11/14/10? If not, did we add more testers to the project, which would explain the sudden rise? And what about the drop?
In short, the bug count doesn’t answer questions; it triggers questions that most likely could have been asked without it, or that are meaningless if we actually want to know about the quality of the product. A talk with a tester is more meaningful than looking at bug count data.
In a past project I saw a very sudden rise in the bug count, and the explanation was that I had assigned a new contractor to the project. She found plenty of problems within a very short time span, showing the experience for which I hired her. After reviewing the bug reports and discussing them with both testers, it became clear that pairing with the new contractor was a good idea to ensure the knowledge rubbed off. I used the metric not to inform me about the quality of the product but as a trigger to find out something I could have learned from a five-minute chat with the new contractor.
That’s one of the problems I see with looking at bug count data: replacement. I could have spoken to the tester immediately instead of wasting time looking at a graph first.
Another example – the drop at the end seems off to me. Again from experience, something similar happened when we lost the connection to the database – a network error. Since I was the first one in and noticed the odd bug count, we could get the problem fixed before the rest of the team arrived. Note that we later added Splunk for proper monitoring, not only of the operational machines but of the test environment as well, rather than relying on this crutch. This problem was caught by lucky accident, not by design.
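Catching an outage by design rather than by accident doesn’t require much. As a minimal sketch (not how our Splunk setup worked, and the host name is made up), a periodic TCP probe of the test database would have flagged the network error directly instead of letting it show up as a dip in the bug count:

```python
import socket

def database_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

# Hypothetical test-environment database; in a real setup this check
# would run on a schedule and alert the team on failure, e.g.:
# if not database_reachable("testdb.example.internal", 5432):
#     alert_team("test database unreachable")
```

The point is not this particular probe but that the question “is the test environment up?” deserves its own tool, not an inference from a metric collected for something else.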
The important part here is not the bug count but asking the right questions and implementing the proper tools.
Counting bugs usually has a very low initial cost, as it’s provided with most bug tracking tools. This is one of the reasons it’s so widespread, in addition to how easy it is to read meaning into the numbers.
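To show just how low that cost is, here is a sketch that derives the whole “metric” from a bug-tracker CSV export (the column names and sample rows are hypothetical, but most trackers can produce something similar). A few lines of code and you have numbers per week that tell you nothing the underlying reports don’t:

```python
import csv
import io
from collections import Counter
from datetime import date

# Hypothetical export: one row per reported bug.
EXPORT = """created,priority,summary
2010-11-01,high,Login fails on empty password
2010-11-03,low,Typo on settings page
2010-11-08,high,Crash when saving profile
2010-11-09,medium,Slow search results
2010-11-15,low,Misaligned button
"""

def weekly_bug_counts(csv_text):
    """Count reported bugs per ISO (year, week) -- the entire 'metric'."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        year, week, _ = date.fromisoformat(row["created"]).isocalendar()
        counts[(year, week)] += 1
    return dict(sorted(counts.items()))

print(weekly_bug_counts(EXPORT))
```

Note that the code throws away exactly the parts that matter: who reported what, why, and what the reports say about the product.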
At the time I made it very clear to the test team that they were not judged on the number of bugs they found, and that the count would not find its way into their reviews, as is the case in some places. If everyone understands what data we have in front of us and, more importantly, what it doesn’t tell us, it can be a useful tool in the context of your project. If we remind ourselves of our biases, we are less likely to read something into the data that is not there.
Will I continue looking at the bug count? Probably, but the emphasis is now on finding the real, contextual information.
To make sure everyone is paying attention: the graph you see above is not a bug count graph but an edited user count graph. Just because it looks like what you’re expecting doesn’t mean that’s what it is. In short, the numbers are meaningless. Context information is what we should really be looking for.
Happy hunting, not for bugs but for the right questions to ask.