Sometimes "Good Enough" isn't good enough

I've been a big fan of the idea of "Good Enough" software testing over the last decade. Rather than thinking of good software testing as akin to "Digital" technology, with its complete, precise values, I've thought of it more like "Analog" technology, with big dials and reasonable, approximate (and cheaper) signals.

This past week, I've watched my seven-year-old son play with a new LEGO set that he got for Christmas. It's a neat mechanical LEGO set that lets him build a motorised helicopter, a cool car, or an attack crab thingy. (ASIDE: I can't begin to imagine what the Marketing team's conversation was like when they thought up that last one!) When he completed the helicopter and turned on the motor, I noticed that it didn't sound right to me. So I went over and took a close look at his creation. It looked correct. There didn't seem to be any missing pieces, but when he turned it on again, I noticed that not all of the gears turned together consistently. I picked it up and took a really good look at it. Not knowing much about how it was built, I simply squeezed together any LEGO pieces that weren't tightly packed whenever I came across them.

There was one set of LEGO pieces with a gap of about a millimetre. When I squeezed them together, it made a satisfying snap. I asked my son to turn on the motor again, and this time it not only sounded correct, but all of the gears also worked together in perfect sync. Voila!

I thought about this for a few moments afterwards. Up until then, my son had worked on the premise that if the LEGO pieces were reasonably attached, it was "good enough". He didn't need a tight fit between every single piece to see the finished product. I mean, it looked like the complete picture of the helicopter in the instruction manual, so what difference would a small gap between a few pieces make?

In this case it made a big difference. If it needs to work like clockwork, then "good enough" is probably not enough.

So what's the tie-in to software testing? Well, just how scalable is the "Good Enough" approach? For me, it's always been about testing against the most important risks and using whatever tools and techniques seem appropriate to the situation at hand. That's always seemed kind of foolproof to me.

Maybe my Digital/Analog analogy is a flawed one. I mean, Analog technology has its limits and is not very scalable. Digital technology is more precise and can handle more information. Is there a point when a Digital solution gets so large that it requires an Analog approach again? (I think the answer here is 'yes.')

Is there a time when "good enough" needs to be replaced with a more complete, structured, or methodical approach to software testing? I can't think of any situations like that right now, but that doesn't mean there aren't any. That is, I can't think of a time when I wouldn't say that good software testing has to strike a balance between the economics, quality, and time to market of a product or system. Shipping with bugs is okay if you know they aren't critical or life-threatening.

So perhaps "good enough" doesn't always apply when we're dealing with real-world objects like lego creations, automobiles, watches, et cetera. I think that it still holds pretty well to the virtual world of software testing. Until someone can give me a good example or two of when "good enough" wouldn't be good enough for testing software, I think I'll chalk this up to another distinction between testing software and testing hardware.

2 comments:

  1. I think you are echoing a common misconception about "good enough" software and "good enough" testing. Words like "adequate" and "sufficient" and "good enough" mean what they say.

    "Good enough" is about doing what is needed to make something that is actually, in fact, genuinely really good enough. But not better, or not much better, unless better is free. Beyond "good enough" is not "good". Beyond "good enough" is gold-plated.

    Do we sometimes need a heavier-weight process? Yes, for many reasons. Will a heavier-weight process actually increase quality over a well-applied lightweight strategy? I'm skeptical, but it's possible. But if a different process were needed to achieve a required level of documentation (for litigation support, for example) or some other required attribute of the software, then unless we include that process, we're probably not doing "good enough" development.

  2. You wrote "If it needs to work like clockwork, then 'good enough' is probably not enough."

    In the context of the toy, the flawed construction was good enough, at least for the primary user, your son. It didn't need to work like clockwork. If the gadget is a crucial part of a safety-critical device, then what you had was not good enough.

    I agree with Cem's assessment about "good enough". Unfortunately, "good enough" has negative connotations, something like "Well, it's good enough to get the job done, but I still don't like it." It's the connotations that trip us up when we think about the concept. Maybe "adequate" or "sufficient" is a better adjective. I'd suggest "boring", because the alternative is usually "frustrating", but of course we can't avoid the connotations with that either. These words bring us to one of the important points: making something more than "adequate" might not be a good use of our resources, as long as we really understand what "adequate" (or "good enough") is. But we can easily agree that inadequate software is, well, inadequate.
