Last weekend I presented at the Toronto Workshop on Software Testing (TWST). The topic for the workshop this year was "Coaching, Mentoring and Training Software Testers" and I decided to present a process that I had developed with my team to help manage a peer review process. For us, that's a coaching opportunity, so I thought it was relevant to the theme. There was some excitement and hoopla over the presentation that had me baffled for a bit though.
My presentation had some specific content and charts and things which I know were "new" because, well, we developed them. That was the point of sharing it with colleagues. The thing that had me stumped/confused was at a higher level than that though.
You see, I didn't think there was anything particularly new about the whole idea. Our team follows a process that, as someone reminded me, was initially developed 10 years ago for some specific company's needs. That initial presentation is on the web for all to see and I came across it a long time ago.
The high-level process description fits in with our current company's needs and has been working well for us for several years now. In all the articles and descriptions that I've found online, there was just one piece that wasn't described too well... okay, at all. To clarify the specific example here, I'm talking about Session-Based Test Management (SBTM) - a test management framework to help manage and measure your Exploratory Testing (ET) effort.
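For readers unfamiliar with SBTM, the publicly documented session sheet records a charter, a session duration, and a task breakdown across test design/execution, bug investigation, and setup (the "TBS" metrics). Here's a minimal sketch of that idea; the class and field names are my own, not part of any SBTM tooling:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """One SBTM test session, per the published session-sheet format."""
    charter: str           # the mission for this session
    duration_minutes: int  # a "normal" session is roughly 90 minutes
    pct_test: int          # % of time on test design and execution
    pct_bug: int           # % of time on bug investigation and reporting
    pct_setup: int         # % of time on session setup

    def on_charter_minutes(self) -> float:
        """Minutes actually spent testing against the charter."""
        return self.duration_minutes * self.pct_test / 100

session = Session(
    charter="Explore the export dialog with malformed file names",
    duration_minutes=90,
    pct_test=70, pct_bug=20, pct_setup=10,
)
print(session.on_charter_minutes())  # 63.0
```

The point of the breakdown is that it makes the ET effort measurable: you can roll these numbers up across sessions and testers.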
Overall, the process has worked for us in our present company/context, but the "Debrief" step/phase is a bit 'vague' or 'light' in description compared to the other important elements. As it happens, I have a teaching (B.Ed) degree, coupled with my 10+ years of experience, so I feel I am able to improvise this step fairly well.
But one thing always bugged me about that step - it didn't fit in with the overall flow of the rest of the framework. That is, according to the framework's 'activity hierarchy', you're either testing or you're not. Well... I don't think that it's that clear cut. What about the Debrief step? Is that Testing? Or is it clearly "not"? I don't think it's either, but closer to the 'testing' side if I had to pick one.
Okay, so if I look at it as closer to the testing side, then does it fit in with the overall framework? Actually, no. It's an oddball. The catch here is that the basic SBTM framework helps you manage and measure the ET effort, but says nothing about measuring the Debrief step. Wait a minute! Hold the phone. Why not?!
So we, as a team, came up with a process for managing and measuring the Debrief step that follows the same basic format as the testing part. When I look at it, it makes sense. It looks more like a complete picture to me now.
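The post doesn't spell out the details of our debrief process, so purely as a hypothetical illustration of "following the same basic format": a debrief could be recorded as its own small sheet, with a duration and notes, tied back to the session it covers. Everything below is my own sketch, not the team's actual artifact:

```python
from dataclasses import dataclass, field

@dataclass
class Debrief:
    """Hypothetical debrief record, mirroring the session-sheet style."""
    session_charter: str               # which session was debriefed
    duration_minutes: int              # time spent in the debrief itself
    notes: list[str] = field(default_factory=list)  # coaching points raised

d = Debrief(
    session_charter="Explore the export dialog with malformed file names",
    duration_minutes=15,
)
d.notes.append("Discussed narrowing the charter next time")
```

Once debriefs are recorded in the same shape as sessions, they become manageable and measurable in the same way - which is exactly the symmetry described above.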
For a number of people at the workshop last week, our process seemed like a novel/fresh way of looking at it. I can't see that actually. As far as I can tell, our team is still following the same basic outline and process described over a decade ago. We just filled in some of the blanks in a way that we thought was consistent with the rest of the framework.
When people ask you to "think outside the box" they mean to think in an unconventional way to solve some problem. To me, I looked inside the box and noticed something was incomplete/missing. My solution, as far as I'm concerned, was "thinking inside the box."
Having said that, when I step back from the process to see what the big picture looks like, I feel a bit like Alfred Nobel after inventing dynamite. The reaction from my colleagues at the workshop was kind of like that too - explosive! (It was quite cool actually. I don't recall the last time I've seen such a flurry of excitement and questions in such a short time!)
I don't know what to do with this process right now. I'm changing it to put some "safeties" in place because I recognise the dangers of allowing people to easily generate "metrics" that represent the quality of a tester's work and learning.
Sigh. There is one thing I've gotten from all of this. I have more learning to do. This time, I know it is clearly in the field of Psychology, although I'm not yet sure where to start. I need to understand this Debrief dynamite that we've invented - what it means and how it can be used for good and not for evil in the hands of fools.
I think the new process is working (for me) though, because it allows me to ask new questions that I haven't thought to ask before. For instance, if ET is the simultaneous learning, design and test execution (by one definition), then managing the debrief step helps me to track a tester's learning. Am I really tracking learning though? Am I observing someone's level of interest and attention to detail? How much they care? What are the implications of poor quality work that doesn't improve over time? Should that tester move on to something else? Should I leave them alone if training and reinforcement doesn't help and they do generally "okay" work? ... ?
Aside from the details and all of these interesting new questions, it was just surprising to me that no one has come up with something similar already. I looked inside the box. I filled in the missing piece using a similar-looking piece. I don't see that as being unconventional.
Sometimes you don't have to go out of your way to come up with something fresh. Maybe no one has gotten around to looking at all the corners of the box yet. Maybe someone meant to but got distracted and didn't return. Maybe there's an opportunity waiting. Maybe you are the one to see it.
Have you looked in your box yet? Sometimes a good tester, like an inventor or explorer, is just thorough.