I test, therefore I am

A couple of days ago I had a thought: “While testing, I’m not just testing the software under test. What else am I testing?”

[Image: card puncher, NARA]

It quickly became clear to me that it’s not only things within the box that we test but also thoughts and ideas. So I started the following list; the reasoning behind it is that if I’m aware of something, I can focus on it and test it. I may then think of additional heuristics that help me test that particular part and find risks, bugs, issues and information.

What do we test? 

  • The application
  • The operating system, anti-virus and other unrelated software running in the background
  • (Test) environments and infrastructure
  • Ourselves, our
    • Models, thoughts and ideas
    • Capabilities
    • Skills
    • Knowledge
    • Oracles
    • Biases
    • Feelings
    • Assumptions (thanks to Thanh Huynh)
  • Needs – Testers vs Customers vs Business (thanks to Dan Billing)
  • Our processes
  • Our document structure, content and formatting
  • Our peers, test partner or debrief partner
  • Relationships with
    • Software and hardware
    • Project and non-project colleagues
    • Customers

If you can think of anything else, please add a comment and I’ll add it to the list!

Can we actually count bugs?

A recent quote on Twitter got me thinking: “You can’t measure quality but you can discuss it.”

I thought about the different ways we measure quality, and one of the most infamous ones is to count bugs. I then thought about whether I actually understand what it means to count. As a kid I used my fingers to help, and I thought I knew what I was doing. I’m a wee bit further along now, meaning I’m not so sure anymore.

[Image: kid counting on fingers]

So I looked up the definition of what counting really is, from a mathematical point of view:

Wiki: Counting is the action of finding the number of elements of a finite set of objects.

OK, the element part is clear, we know what a bug is, right? Here’s one definition, again from the Wiki:

A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program’s source code or its design, or in frameworks and operating systems used by such programs, and a few are caused by compilers producing incorrect code.

There are different definitions around, for example from IEEE 829, ISTQB, Softwaretestingfundamentals, RST, etc.; the list is nearly endless. Let’s assume – and that assumption is a big one, because it hasn’t happened so far – that we all agree on the same definition of what a bug is. OK, we may be able to do that within one company, at least if we don’t ask everyone and squint a bit while doing it.

So the question becomes: can we define a set properly? If we look at the definition above it’s actually pretty vague; for example, who defines what incorrect or unexpected is? What’s unexpected to one person may not be to the next. With that definition we can’t actually define the elements or a proper set, because we’re dealing with relationships here, and we’d need to define all unique elements of those relationships in order to define the set, which is impossible.

To make matters worse, the set of objects has to be finite. So while we could, in theory, count all the grains of sand on a defined stretch of a beach, we can’t count bugs, as they rely on models, relationships, behaviours and ideas – which are infinite.

In short, since we can’t properly define the set, which would also have to be finite, we can’t call it counting, at least not in the mathematical sense that most people imply. We may point to the screen and count the number of bug reports in a bug tracker, but that is a completely different kettle of fish.
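To make the distinction concrete, here’s a minimal Python sketch (the export file and its field names are purely hypothetical): counting bug reports works because the reports form a finite, well-defined set; there is simply nothing equivalent to iterate over for “bugs in the system”.

```python
# Counting bug *reports* is straightforward: the reports in a tracker
# form a finite, well-defined set. (File and field names are made up.)
import csv

def count_bug_reports(export_path="tracker_export.csv"):
    """Return the number of rows (i.e. bug reports) in a tracker export."""
    with open(export_path, newline="") as f:
        return len(list(csv.DictReader(f)))

# What we cannot write is count_bugs(system): there is no agreed,
# finite set of "bugs" to enumerate, because what counts as a bug
# depends on definitions, expectations and relationships.
```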

Mistaking one for the other can be, and is, done for political reasons, but if we’re serious about software testing we have to make sure people understand exactly what it is we’re reporting.

In other words, we can’t measure the number of bugs in a system but we can discuss them.

W5H3 – A model for communication in software development

I first gave a talk about my model for communication at DEWT6 (Dutch Exploratory Workshop on Testing) in January 2016.

At its core is a very extensive mindmap which is intended to model all the different aspects of communication before it takes place, in the context of software development and especially software testing. My examples are geared towards this area, but it works for completely different areas as well.

The idea is, for an upcoming conversation, meeting or other communication event, to go through this model in advance and take notes (if needed) in order to improve the communication.

People at DEWT suggested that they could see it being useful for retrospectives as well – how you use it is up to you. You may want to add to it, or remove some parts to make it easier to work with; use the mindmap that is right for you. After the talk Zeger van Heese (@testsidestory) suggested adding “feedback” to the mindmap, and with it the question of how to ensure that what was said wasn’t understood in a different way. That makes a lot of sense to me, so it’s in now – thanks for the contribution!

[Image: W5H3 mindmap]

Click on the map or here to go to the Biggerplate site if you’d like to download the map. Without it, understanding the model will be difficult, if not impossible.

It may be useful to start at the top right (Who) and then work your way through it in a clockwise direction. That way it’s easy to figure out Who one wants to communicate with, What should be communicated, and so on.

Adverse factors is an area that lists a couple of reasons why communication may fail and what to look out for – something to check your approach against once it’s nearly finished.

Here’s an example of how you may want to use it:

“I want to talk to a Software Tester about the testing they’ve just done.”

  • Who is clear in this example: the tester.
  • What I want to talk about needs to be clearer in my head. It could be test coverage, the approach they took, potential problems discovered, …
  • Why do I want to talk to them about it? Do I feel unsure that they have the skills to cover all aspects? Maybe it’s an area that I don’t know well and want to learn something about. Maybe there are several reasons why I want to have this discussion. To me this is one of the most important but often overlooked areas. The 5 Whys technique may be used here as well to get to the root reason for the communication.
  • When do we have the discussion: right now, in an hour’s time? Maybe schedule it into a weekly or daily meeting?
  • Where do we want to have this conversation? In my office? Do I sit with the Tester in their open space office? Do we go to a meeting room or even go offsite? How does the where impact communication in this case?
  • How much do we communicate? If the Tester is new to the team there may be a need to communicate a lot more tacit knowledge than with long-time employees of the company. Do I want to get a high-level overview or go into specific details?
  • How many people are present for this conversation? Is it a 1:1? Would it make sense to ask other testers to join us and have a group discussion?
  • How is the biggest part and needs a lot of attention. The how is where all the information from the previous questions is distilled into an approach. The participant(s) may be insecure but experienced, or boisterous newbies; they may feel more at ease in a structured environment, or not; some people can take criticism easily but don’t feel taken seriously if it doesn’t come, and so on. Different tools can be applied and combined in order to reach the goal of the conversation. Feedback from the communication partner determines how the approach may need to change during the communication.

Check the adverse factors node for anything that may have a negative impact that hasn’t been covered by the rest of the model.
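If you prefer written notes over a purely mental walk-through, something as simple as the following Python sketch could hold them. The field names merely mirror the W5H3 questions and are my own illustration, not part of the mindmap itself.

```python
from dataclasses import dataclass, field

@dataclass
class W5H3Notes:
    """Pre-communication notes along the W5H3 questions (illustrative only)."""
    who: str = ""        # Who do I want to communicate with (and who NOT)?
    what: str = ""       # What do I want to talk about?
    why: str = ""        # Why do I want this conversation? (5 Whys may help)
    when: str = ""       # When: now, in an hour's time, a scheduled meeting?
    where: str = ""      # Where: my office, their desk, a meeting room, offsite?
    how_much: str = ""   # How much detail or tacit knowledge is needed?
    how_many: str = ""   # How many people: 1:1 or a group discussion?
    how: str = ""        # How: tone, structure, tools, handling of feedback
    adverse_factors: list = field(default_factory=list)  # anything that could derail it

notes = W5H3Notes(
    who="The tester who just finished their session",
    what="Test coverage and potential problems discovered",
    why="I'm unsure whether all risk areas were covered",
)
```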

The “NOT” heuristic

I used the NOT heuristic to both get more ideas and to validate the thoughts that I have.

I’d ask not only Who do I want to communicate with, but also who NOT? Why do I want to communicate with someone, and why NOT? This way it’s less likely that certain aspects are missed, and it’s easy and fast to do.
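As a throwaway illustration of how mechanical the pairing is (nothing more than that):

```python
# Purely illustrative: pair each question with its "NOT" counterpart.
questions = [
    "Who do I want to communicate with?",
    "What do I want to talk about?",
    "Why do I want this conversation?",
]

for question in questions:
    print(question)
    print(question.rstrip("?") + " ... and NOT?")
```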

A bug count in context

Counting bugs and creating metrics out of the numbers is a widely (mis-)used practice in software development. I understand where Michael Bolton is coming from and agree to a large degree. Apart from the obvious problems, it also fosters bad behaviour as people try to game the numbers, reducing their value. Of course there’s a however looming…

I’ll raise my hand and profess myself guilty of counting bugs in the past and creating metrics and diagrams out of it. For a variety of reasons explained below it looked like it was helpful.

I made the mistake of thinking of the bug count as information, which it isn’t. It’s data. It becomes information only with context and with someone attaching meaning to it, right or wrong. I’ll try to make the point that the counts themselves are meaningless and that the context information is actually the important part.

Here’s one such example where I tried to read meaning into the data. In a non-agile development project (with variations for company and type of project) I expect a relatively steep rise in the bug count, a couple of waves as new builds come in, bugs are fixed and new ones are found, and a tapering off towards the end. Note: in agile projects not all bugs may actually get reported, as many are fixed on the spot, so the count becomes even more meaningless.

So the picture may look something like this: [graph]

Looking at the graph I’m wondering what I’m missing (yes, context). Are these the total number of bugs? Or just the high priority ones? The numbers go up and down in a certain way; is that expected? For example, was there a big new feature in the build around 11/14/10? If not, did we get more testers on the project, which would explain the sudden rise? What about the drop?

In short, the bug count doesn’t answer questions; it triggers questions which most likely could have been asked without it, or which are meaningless if we actually want to know about the quality of the product. A talk with a tester is more meaningful than looking at bug count data.

In a past project I saw a very sudden rise in the bug count; the answer was that I had assigned a new contractor to the project. She found plenty of problems within a very short time span, showing the experience for which I had hired her. After a review of the bug reports and a subsequent discussion with both testers it became clear that pairing with the new contractor was a good idea, to ensure that her knowledge rubbed off. I used the metric not to inform me about the quality of the product but as a trigger to find something out – something I could have had from a five-minute chat with the new contractor.

That’s where I see one of the problems with looking at bug count data: replacement. I could have spoken to the tester immediately instead of wasting time looking at a graph first.

Another example – the drop at the end seems off to me. Again from experience, something similar happened when we lost the connection to the database – a network error. Since I was the first one in and happened to look at the bug count, we could get the problem fixed before the rest of the team arrived. Note that we later added Splunk for proper monitoring, not only of the operational machines but of the test environment as well, rather than relying on this crutch. This problem was caught by lucky accident, not by design.

The important part here is not the bug count but asking the right questions and implementing the proper tools.

Counting bugs usually has a very low initial cost, as the count is provided by most bug tracking tools. This is one of the reasons it’s so widespread, in addition to how easy it is to read meaning into the numbers.
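To show just how low that cost is, here’s a small sketch that turns a hypothetical tracker export into a weekly count of new bug reports; the file and column names are assumptions, and note that the output carries none of the context discussed above.

```python
# A minimal sketch of how cheaply a "bugs per week" count can be produced
# from a hypothetical tracker export. File and column names are made up.
import csv
from collections import Counter
from datetime import datetime

def new_reports_per_week(export_path="tracker_export.csv"):
    counts = Counter()
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            created = datetime.strptime(row["created"], "%Y-%m-%d")
            counts[created.strftime("%Y-W%W")] += 1  # bucket by week
    return dict(sorted(counts.items()))

# The numbers come out almost for free, but they say nothing about new
# features, new testers, environment outages or severity – the context
# that actually makes them worth discussing.
```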

At the time I made it very clear to the test team that they would not be judged on the number of bugs they found, nor would that number find its way into their reviews, which is the case in some places. If everyone understands what data we have in front of us and, more importantly, what it doesn’t tell us, it can be a useful tool in the context of your project. If we remind ourselves of our biases we are less likely to read something into the data that is not there.

Will I continue looking at the bug count? Probably, but the emphasis is now on finding the real, contextual information behind it.

To make sure everyone is paying attention, the graph that you see above is not a bug count graph but an edited user count graph. So just because it looks like what you’re expecting doesn’t mean that’s what it is. In short, the numbers are meaningless. Context information is what we really should be looking for.

Happy hunting, not for bugs but for the right questions to ask.