Testers Create Bugs, Not Bug Reports!

I've been thinking a lot about this phrase lately: "Your thoughts shape your reality." I think it comes from a Buddhist teaching, but I'm not sure; I read it as a second-hand quote somewhere that I can't recall right now.

I'm also familiar with the tester's lament: "hey, I didn't put the bug in the code, I just report it."

But what about the phrase: "if a tree falls in the forest and no one is around to hear it, does it make a sound?" That one has always annoyed me. The way I see it, the answer depends mostly on how you define a sound. If a sound requires a listener in order to be real, then no, no sound was made.

What about a bug? If there is a bug in the software but no one encounters it, is there really a bug in the software?

Now, I report bugs. A lot of bugs. I employ many creative and imaginative tricks to find them. They love me. They come to me even when I don't expect it. One of my tricks is to change my perspective on how I look at the software. Another is to exploit vague areas, or areas of doubt, within the development team, the development processes, or the methods of communication. For example, if a specification contains a vague statement that could be confusing or interpreted in different ways, I'm pretty confident there will be lots of bugs there.

But hold on. What if I don't go looking for the bugs? What if I don't report them and no customer ever sees the problem or complains about it? Does the bug really exist?

Or does the mere act of identifying this discrepancy between intention and execution create the entity that I call a bug? Did I create it or was it already there?

Scenario: A spec-writer wrote a document that was interpreted in some way by one or more programmers. According to their interpretation, the implementation (the software) is working as intended. They check that their code measures up to their understanding (unit testing?) and then build the software into a deliverable product.

Then along comes a tester. A tester looks at it in a completely different way and suddenly the software is full of bugs. But wait a minute! The bugs weren't there until someone looked and said they were there.

This reminds me of Schrödinger's cat. Is the cat alive? Is it dead? Is it both alive and dead, or neither? You have to look to find out, but once you've looked you've changed everything. What a psych trip!

So, what if finding bugs falls into the same category? In that case, the mere act of observing has changed the reality of what's in the box. By looking inside and asking the question, we have shaped the reality by our thoughts.

Doesn't that mean that we have therefore created the bugs that we report? The programmers didn't put the bugs there. They interpreted the specs and design documents according to their thoughts and shaped the software according to their reality. They built no bugs.

The bugs were called into existence when we, the testers, said they were there with our mind tricks and clever tools.

The bug report, therefore, is just a record of how we have reshaped reality with our thoughts in a way that is different from how reality looked before we started testing. We created those bugs in the bug reports.

One might argue, then, that if you make the bug reports go away you make the bugs go away. If no one (customer) sees the bug, does it really exist?


That's a change in perspective from what I was taught about testing. I don't report bugs. I create them!

I Am The Arrow

One of my favourite television series was called Babylon 5. Good stuff. From time to time I recall particularly well-written scenes and moments in that series that seem to apply well to some situation happening in my life. That happened again today.

I had a meeting today that changed everything. I've spent the last decade searching for answers that apparently weren't there. Today I found a well, no, a sea, of good answers. It's going to take me a while to assimilate everything and find a way to communicate it back to people. It's not the right piece of information; it's the right way of looking at things.

I said to someone this afternoon that this is the first time in 9 years that I've really learned something NEW. Something that makes me think. Something that makes me think in a new way.

For the first time in almost a decade, the phrase that comes to mind is "I am the arrow." I know where I'm going and what I need to do.

The context for that phrase is an episode of Babylon 5, the Season 3 episode "War Without End (Part Two)". There was this scene where Sinclair is absolutely certain he knows what is going to happen (in his future, which is really everyone else's past: time-paradox stuff) and what he needs to do. That scene has always haunted me. It's not every day that you know with such certainty exactly what you need to do.

In all fairness, that has happened to me once before. It was the day that I started the relationship with my wife -- that was almost 20 years ago. I had no doubt. I was absolutely certain.

Unfortunately, moments like that don't come around very often. I needed to put a little note here to mark the occasion that it happened to me again today. It feels good for a change - to know the solution to a decade-long problem.


Happy St. Patrick's Day! =)

Ross Collard's Tea-Test (Technique)

I was reviewing the test notes for one of my testers today when I came across an interesting note. I decided to look up the problem report in the bug tracking system to read the details. The bug report said that if you go to a particular new page in a web app (currently in development) and enter some information, wait 35 minutes, and then press a button to continue, you get an error. If you wait less than 30 minutes there is no error, and if you wait over 60 minutes the application times out (as expected), so you have to wait just the right amount of time.

Suddenly I started laughing out loud saying: "Hey, it's the Tea Test!"

I called over the tester whose notes I was reviewing to thank him for his good work in finding, isolating and reporting the bug and to tell him this story. You see, there are many kinds of test techniques out there. At the very least, most programmers and testers have heard about BVA and Equivalence Classes, and some testers who take an interest in their profession learn about other techniques as well. In our test team, we can rattle off at least a dozen techniques at any given time and we are usually selecting from among a pool of 30 or more techniques on any given project, but that's not important right now.

What was important at this moment was that the technique that I could apply to the bug found wasn't on any list I had ever seen, but it was a technique that I knew about.

Back in the summer of 2003, I drove down to Virginia to attend a special 5-day "Black-Box Software Testing" workshop offered by both Cem Kaner and James Bach. It was a great opportunity and I didn't want to miss it. Much to my surprise and delight, Ross Collard had also come to attend the course. Ross had taught me my first courses in Test Case Design and Test Management some 5 years prior, and it is information I still use to this day.

One day during the BBST workshop Cem and James asked the participants to name some test techniques. Ross offered two that I hadn't heard before. The first was the "shoe test". That's where you take off your shoe and put it on the keyboard. Then you wait to see how the app handles the non-stop input.

I've seen this technique happen in real life. I've seen someone lean back against their desk while talking to others, not realising that they were leaning on the keyboard. Another time, I saw someone put a magazine on their desk; it accidentally landed partly on the keyboard and set the computer beeping as the keyboard input buffer filled up.

This can be an interesting technique as you ponder which key on the keyboard to place your 'shoe' on for maximum effect in the App Under Test. For example, how well do you think your web app can handle you pressing [F5] to refresh the page non-stop? Can you find a web page that makes several database calls and then try [F5] repeatedly again? Think you can bring down a database server by doing this? [evil grin]
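
Out of curiosity, the [F5]-hammering idea can be sketched as a tiny harness that fires an action repeatedly and concurrently and records every outcome. This is only a sketch under assumptions: `hammer` and `fake_refresh` are hypothetical names I've made up, and the stand-in "page load" here always succeeds; pointing the action at a real page request is left to your own judgement (and your DBA's forgiveness).

```python
import concurrent.futures

def hammer(action, times=50, workers=8):
    """Invoke `action` repeatedly and concurrently, a stand-in for
    holding down [F5] on a page. Returns (index, outcome) pairs so you
    can see which repetition first produced an error."""
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(action): i for i in range(times)}
        for future in concurrent.futures.as_completed(futures):
            try:
                results.append((futures[future], future.result()))
            except Exception as exc:  # record failures, don't stop the run
                results.append((futures[future], exc))
    return sorted(results)

# Stand-in "page load"; in real use this would issue an HTTP request.
def fake_refresh():
    return "200 OK"

outcomes = hammer(fake_refresh, times=20, workers=4)
```

If any repetition raises, the exception lands in the results list next to its index instead of aborting the run, which is exactly the information you want from this kind of stress probe.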

The second technique Ross mentioned was the "tea test". I hadn't heard of that one before, so I asked if the "T-test" was related to the Statistics t-test. He said "no, it wasn't." He said that what he would do here is enter some input in an app, get up from the desk, walk over to make a cup of tea, have the tea, walk back and enter the next input. And he punctuated this by saying that since he doesn't walk very fast these days, this process could take anywhere from 30-40 minutes. Ha! That was funny. I hadn't heard of any test technique like that before.

Fast forward to today. That was exactly the amount of time (30-40 mins) that my tester had to wait for the bug to appear! The tester told me how he had spent over an hour trying to reproduce that bug in a background VMware session, so that he could continue with other testing while waiting for the right amount of time to pass. We both laughed at how the "Tea test" applied here.
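
For what it's worth, a "tea test" like this can be automated without anyone waiting 35 real minutes, provided the app's notion of time can be faked. Below is a minimal sketch under assumptions: the 30- and 60-minute windows come from the bug report above, but `FakeClock` and `continue_session` are hypothetical stand-ins, with `continue_session` merely modelling the reported buggy behaviour rather than calling the real app.

```python
import datetime

class FakeClock:
    """Injectable clock so a 35-minute tea break takes no real time."""
    def __init__(self, start):
        self.now = start
    def sleep_minutes(self, minutes):
        self.now += datetime.timedelta(minutes=minutes)

def continue_session(started_at, clock):
    """Hypothetical stand-in for the app, modelling the reported
    behaviour: fine under 30 minutes, an error between 30 and 60,
    a clean timeout past 60."""
    elapsed = (clock.now - started_at).total_seconds() / 60
    if elapsed > 60:
        return "timeout"   # expected: the session expired
    if elapsed >= 30:
        return "error"     # the bug: a failure in the middle window
    return "ok"

clock = FakeClock(datetime.datetime(2007, 3, 17, 9, 0))
start = clock.now
results = {}
for wait in (5, 35, 90):   # one probe per time window
    clock.now = start
    clock.sleep_minutes(wait)
    results[wait] = continue_session(start, clock)
```

Probing one value per window (plus the boundaries, if you have the patience) is just ordinary boundary-value analysis; the tea test is really BVA on the time axis, with milk and sugar.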

The developer assigned to fix the bug (who sits on the other side of the desk partition from me) must have overheard me telling this story. He piped up and said: "I hate that bug! I have to wait a long time to try and reproduce it!" We laughed harder. =D

This was the first time I've seen Ross' "Tea test" actually work -- i.e. actually find a bug. I thought it was just a joke at the time. I now know there's truth in that technique. It's not that I really doubted Ross, it's just that he's a funny guy and sometimes you can't tell when he's pulling your leg. =)

Ross, you really are the Test Master. Next time I see you, the tea is on me. Cheers!

Something Interesting about Reporting Bugs

I happened upon this link just now: http://cwe.mitre.org/ for the "Common Weakness Enumeration" (CWE) project.

Here's the blurb:
"International in scope and free for public use, CWE™ provides a unified, measurable set of software weaknesses that will enable more effective discussion, description, selection, and use of software security tools and services that can find these weaknesses in source code."

I'll have to look into this later. Don't know anything about it yet.

To be a Good Tester you must think like a Scientist

It's funny how many times this particular analogy keeps coming up. The comparison between Testers and Scientists, and the similarity between testing and the Scientific Method.

Most recently it occurred to me while reading one of my son's chapter books. FYI: "chapter" books are short books broken into chapters for kids just beginning to read on their own, usually kids around 7-8 years old. These aren't Harry Potter books... most of them barely reach 80 pages.

The book that caught my attention is called "Jigsaw Jones #9: The Case of the Stinky Science Project" by James Preller. The main characters are in Grade 2 and in this particular story their teacher was giving a Science lesson:
"The world is full of mystery. Scientists try to discover the truth. They ask questions. They investigate. They try to learn facts. Scientists do this by using the scientific method."

The teacher then handed out sheets of paper which read:

THE SCIENTIFIC METHOD

1. Identify the problem. What do you want to know?
2. Gather information. What do you already know?
3. Make a prediction. What do you think will happen?
4. Test the prediction. Experiment!
5. Draw a conclusion based on what you learned. Why did the experiment work out the way it did?

Back when I used to teach High School Physics, I recall giving a set of steps very much like this one. I might have used the word "inferences" instead of "conclusion" but otherwise it's a pretty good list.

When you think about testing software, you generally run through the same process and set of questions. If you don't think about each of these questions, then you're probably missing something.

For example, here are some questions that come to mind when I think of the Scientific Method applied to testing software:

1. Identify the problem.
  • What are the risks?
  • What is the particular feature of interest?
  • What is it you want/need to 'test' and 'why'?
2. Gather information.
  • What references are around to tell you how something should work? (e.g. Online Help, manuals, specifications, requirements, standards, etc.)
  • What inferences can you deduce (or guess) about how something should work? (i.e. based on your experiences testing similar apps, or other parts of the same system, etc.)
  • What can you determine by asking other people? (e.g. customers, programmers, subject-matter experts, etc.)
3. Make a prediction.
  • Design your tests.
  • What is your hypothesis?
  • What are the expected results?
  • Think about any assumptions or biases that might influence what you observe. How can you compensate for these?
4. Test the prediction.
  • Set up the environment.
  • Execute the tests.
  • Be creative! Make as many observations as you can.
  • Collect data.
5. Draw a conclusion based on what you learned.
  • Did you observe the expected result? Does this mean the test passed? Are you sure?
  • If the test didn't turn up the predicted result, does this mean the test failed? Are you sure?
  • Revise the test design and any assumptions based on what you observe.
  • Do you have a better understanding of the risks that drove the test in the first place?
  • Do you have any new questions or ideas of risks as a result of this test?
  • If you collect a lot of data, summarise it in a chart that can help demonstrate the trend or pattern of interest.
  • Write a few words to describe what these results mean to you. (You might not have all the information, but don't worry about that. Just say what you think it means.)
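
The five steps can even be sketched as a little test-run skeleton. Everything here is hypothetical (the hook names, the toy login example); the point is only that each step maps onto a distinct piece of a test, and that step 5 is a judgement, not a pass/fail stamp.

```python
def scientific_method_test(problem, gather, predict, experiment):
    """Run one testing 'experiment' through the five steps.
    `gather`, `predict` and `experiment` are hooks you supply."""
    notes = {"problem": problem}                   # 1. identify the problem
    notes["known"] = gather()                      # 2. gather information
    notes["prediction"] = predict(notes["known"])  # 3. make a prediction
    notes["observed"] = experiment()               # 4. test the prediction
    # 5. draw a conclusion: a match is evidence, not proof
    if notes["observed"] == notes["prediction"]:
        notes["conclusion"] = "consistent with prediction"
    else:
        notes["conclusion"] = "surprise: revise the test design or assumptions"
    return notes

# Toy example: does the login page reject an empty password?
notes = scientific_method_test(
    problem="empty password handling",
    gather=lambda: "spec says blank passwords must be rejected",
    predict=lambda known: "rejected",
    experiment=lambda: "rejected",  # stand-in for the real observation
)
```

Note that even when observed matches predicted, the skeleton only claims "consistent with prediction". That hedge is the whole point of step 5: Are you sure?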

In general, I find the Scientific Method to be a very good guideline for both beginners and experienced testers alike. Wikipedia has some entries on the Scientific Method as well as a Portal. I think it's a good read. I'd recommend those pages to anyone serious about becoming a good tester.

If there are things on those pages that you aren't sure about, look them up! You might just learn something new about how to think about things that will help you do your job better.

Happy Learning!