Can Practitioners Write Academic-Quality Papers?

I've been thinking about the idea of the AST Journal for some time now. In principle, I really like it. One thing I've worried and wondered about, though, is writing a paper that stands up to academic scrutiny (or pretty close to it, anyway).

Today, I happened to notice the following Quote Of The Day in the weekly StickyLetter (from StickyMinds.com):
"Science is supposedly the method by which we stand on the shoulders of those who came before us. In computer science, we all are standing on each other's feet."
~ Dr. Gerald J. Popek

At first it made me laugh; then it made me wonder what it would take to write a good paper. I've been reading some good articles and papers on Software Testing lately, and if there's one thing I know, it's that I don't have the time to do all the research required to produce a really good paper.

I, like perhaps many experienced testers, learned my craft by doing. I picked up ideas here and there over the last 15 years (a conference presentation here, an email thread there, a passing conversation with a colleague or manager, and so on), and as I applied them I started to notice patterns of what works and what doesn't. I then began to build up some notion of the importance of contextualisation in my successes and failures.

I've written small snippets of perhaps-good ideas and thoughts in the past, I've communicated some other good ideas during workshops and presentations, and I like the idea of sharing knowledge. One thing I don't think I have the time to do, though, is go through all the previously published material to see who thought of what first. It may be important for research historians, but I don't really believe that I have the time or resources available to do a proper job of it. It almost seems weird or absurd to me from one perspective too... who thought of it first? "Well, I thought of idea 'foo' all on my own. I can list all of the experiences, conditions and factors that led to these inferences and the outcomes of applying these thoughts. I didn't read the idea anywhere, so how can I attribute it to someone else?"

So where would I begin to look in the published literature to reference who actually came up with the same idea, or portions of it, before me? Or worse: what if, to do a really good job of it, you need to reference articles and papers from across various disciplines (e.g. computer science, psychology, education, engineering, philosophy, sociology)? That is, what if the profession of Software Testing is really just the centre of a whirlwind of various professions and disciplines, all combining into patterns that we each interpret in different ways to successfully complete the tasks before us? How would you know that you've referenced enough people or ideas to do a proper job in your paper?

I just feel so overwhelmed at the prospect sometimes. It's not writer's block... it's the thought that spending a day or two articulating a few good ideas and the contexts in which they seemed to be successful for me might require weeks of research to support in good academic fashion. And even then, I know I would likely miss some other good referenceable point or idea or person.

Is it possible to do a good job writing a good paper and still have a day job? Perhaps. Is it possible to do a good job writing a good paper and still have a life? I don't think I could. Maybe we all are standing around on each other's feet sometimes. So how do we get past this? How do we turn all this information into knowledge so that we can make some progress? How do we help the next generation so that they don't have to reinvent all of the same ideas that we've had to discover on our own over the last three decades?

Learning not to be the best to win friends

When I was young, I developed a habit that I don't really know how I got started on. I'm sure some Shrink could probably extract it from my memories through some quality sofa time, but I'm not really interested in that. The problem is that the habit stuck and it seriously affected how I interacted with others... but I didn't notice right away. I would have had to have been paying attention to notice. Unfortunately, I only developed that observation skill much later in life.

So here's the thing: I was a perfectionist. I know what you're thinking - every tester says that. Well, I was pretty methodical in my approach and fairly obsessive about it too, right from an early age. If I didn't get a perfect score on a Math test, I practiced the questions I got wrong until I always got them correct. If there was a skill I didn't excel at, I practiced until I had it down - from video games to languages, from sports to mechanics, from music to cooking ... and so on and on. I was only limited by the resources available to me, and as a result I became quite good at a lot of things and a real Information Investigator too. By the time I was twelve I could navigate any and all of the libraries in the city of Toronto (it's a big city). By the time I was in my teens (in the mid-1980s), I had learned to navigate the newly developing electronic information systems using a modem hooked up to my Atari 8-bit computer. The advent of the Internet in the '90s was an absolute dream for me! I was an information junkie and I loved to learn. It became a habit: learn something, learn as much about it as possible, get really good at it.

One day I discovered that there's a more efficient way of describing such a person: a know-it-all. Ouch. Kind of harsh. What was worse was discovering that my girlfriends were intimidated by how much I knew and how well I did at school. That was stupid, right? I mean learning was like a video game for me. I did it to see how much I could cram into my brain before I got a brain cramp or ran out of information sources. I didn't do it to feel superior or make anyone else feel inferior. It was just who I was and what I did.

Well that sucked. I wanted people to like me. So I did what came naturally - I pretended to be dumb and intentionally did worse in school. That became my new game - to try and intentionally not do my best, not be perfect. The problem was that my habit was still there, so I had to keep finding ways of hiding what I knew. My teachers didn't like this new turn of events, of course, but I had some good friends and good times so I didn't really care what the teachers thought.

That worked okay for a while I guess. I only thought about dumbing down when I cared about making a good impression. When you're a teenager, that's not really all the time. ;-)

Fast-forward a decade. After graduating from University (finally! they had to force me out because I didn't want to leave :-) ), I discovered a similar but more distressing dynamic in workplace interactions. So here's a question - don't you want to work with people who are good at what they do? Don't you want to be the best at what you do? Apparently, I discovered, a lot of people don't really care all that much about work. It's just a day job that pays the bills. And that whole perfectionist thing I've got going on not only intimidates some people, but it also (apparently) makes them look and feel bad.

Well that sucks. Again.

Oh yeah, and it gets worse. If you don't kiss the butts of the people in upper management, lie to them and praise them and make them look better than you, you're not likely to advance within your organisation. It was at this point that I discovered another trait that I didn't know I had - integrity. Basically, anyone who wanted me to kiss their butt could just go ahead and kiss mine. :-b

I wasn't about to dumb myself down in the workplace. Certainly not for some management position amongst all the other self-gratifying, ego-centric, self-praising half-wits. No way. Beware the people you surround yourself with indeed!

Of course, a new lesson I learned was how to balance Integrity and Tact (diplomacy) so as to maintain working relationships. After all, I still wanted a paycheck!

Over the course of several years and several companies, I explored and observed the various nuances of interpersonal dynamics in the workplace with regard to the impact of a product and/or technical expert. Basically, I kept getting better at what I do, so I've had to learn how to be good at my job without making anyone else feel bad about themselves, and without sacrificing my integrity when dealing with self-absorbed managerial staff. It's not easy, but I love a challenge! =)

So, why do I bring this up now? Today I had a conversation with my boss - just a regular one-on-one chat that we have every other month at work - and he said something that reminded me of that old habit. He said that other people at work have noticed how much more relaxed and approachable I've been lately.

I know that when I first started at this company, I was a little gung-ho on the whole perfectionist thing (again), but I thought I had turned down the volume on that habit. Some of it is left over from my Software Quality Assurance days as a "Quality Crusader". I try to remain focussed on Software Testing these days, but it's hard to keep my mouth shut sometimes when I see people doing things that could so obviously be done better, and since I have a stake in the success of this company I want to see everyone doing a good job - err, well, the best job they can be doing at the time, that is. Alright? =)

I knew I had turned down the volume on that aspect of myself, so I had to think for a minute about why my boss would suddenly remark that other people had noticed I was more relaxed around the office.

There was something, an event, that happened last October (2006) that changed my life forever. It's still a bit too personal to mention here right now, but needless to say it got me thinking about where I was spending most of my time. Last summer I put in too much overtime at work, and perhaps that made me a bit more stressed than usual -- which was likely the comparison benchmark for my boss's observation.

Since November 2006 my attitude towards work (in general) has shifted again. I kind of don't care about it anymore. I think something inside me snapped. Don't get me wrong, I still love Software Testing and still want to keep getting better at it; it's just that now I think I've finally broken free from expecting perfection in others. Transferring my own preferences onto others is a dangerous thing, in sneaky and subtle ways that you don't usually see coming.

The reality is that I've got more important things to worry about than what other people should care about. If someone cares about what I think, then that's nice, I'll offer my opinion. Otherwise, do whatever the heck you want so long as it doesn't affect my ability to get my job done.

I've read some good stories and articles over the years, and one day I think I would still like to work in an environment where I could have a mentor to learn from. Someone who would expect me to be better than I am and help me reach my potential. Somewhere I could demand the best from my team members and actually get that quality because they care too.

Right now I'm content to work with people I like and who like and support me. That means a lot too.


Never Test Before 4

Kind of a silly thought, I know, but it keeps coming back.

I work in a small agile development environment. Development works in two-week cycles to complete chunks of code. I keep noticing that anything prior to Cycle 4 or 5 is usually incomplete and too unstable for testing. The first several cycles are when all the foundational architectural changes usually happen.

So we can never really test before (cycle) 4. That's fine. I've got these Ruby scripts to keep me busy in the meanwhile. =)

Observation on the Proofreader Effect

I've been working on some performance test scripts using Ruby (Watir, actually) over the last few weeks, and have been happily rewriting the scripts I first wrote a year ago. (Programmers call this activity refactoring.) I've learnt a lot about Ruby and scripting web apps over the last year. One of the biggest helps came when I read Brian Marick's new book "Everyday Scripting with Ruby". Thanks to that book, my performance test scripts are really slick now and look more like a programmer wrote them. But I digress...
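
Just to give a sense of the shape these scripts take - this is a minimal sketch only; the URLs, run count and output format are invented for illustration, and it uses plain Watir browser calls rather than anything from my actual scripts - a stripped-down page-timing script might look something like this:

  require 'watir'

  # Pages to time -- the URLs below are made up for illustration only.
  PAGES = [
    'http://example.com/login',
    'http://example.com/reports',
    'http://example.com/search'
  ]
  RUNS = 5 # how many times to visit each page

  browser = Watir::Browser.new

  PAGES.each do |url|
    times = []
    RUNS.times do
      started = Time.now
      browser.goto(url)              # returns once the page has loaded
      times << (Time.now - started)  # elapsed seconds for this visit
    end
    average = times.reduce(:+) / times.size
    puts format('%-40s avg %.2fs over %d runs', url, average, RUNS)
  end

  browser.close

A real script would probably log the raw numbers somewhere for later analysis rather than just printing averages, but the basic shape is the same: drive the browser, stamp the clock, collect the data.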

The thing that I've been thinking about over the last few days is the problem of testing the scripts that I've written. Any good tester would never trust a programmer to write error-free code, so why should I trust myself to? But then who should test my scripts? Well, there really isn't anyone else around who can right now so I have to do it myself. Is that a problem? I don't think so.

I'm the biggest stakeholder who cares about these scripts working correctly, while my boss is mostly interested in the numbers and my analysis. So I ran the scripts and worked out the kinks one section at a time until I was able to run them straight through several times without error.

Is that good enough testing? Well, I got the coverage and measurements I wanted, so I guess so. The scripts don't have to be perfect, they just need to give me the data I need. So, it's all good.

Right. I completed the analysis for this run and then started to compare the numbers against the benchmark numbers from last year. It wasn't until several hours later that I noticed a typo in the output. Eek!

I'll just sneak back into the code and fix that. No one saw that. I'll just re-run the scripts and make sure the output looks "clean" this time. Great! Looks fine now.

So how did I miss that typo? I thought about this for a while. I think the proofreader effect is like a FIFO buffer. That is, I don't think I could have seen this bug until I got the other, bigger bugs out of the way... you know, like the ones that prevented the script from completing or collecting the data I needed in the first place.

First in, First out. Get the big ones out of the way and then I can see the smaller ones that were hiding underneath. The typo was always there but I was just temporarily blinded to it because my attention was so focussed on the bigger fish.

So was I unqualified to test my own code? I don't think so. I caught all the bugs I cared about. It just took me a few days to find them. Would a separate tester have found the typo before me? Maybe, maybe not. The FIFO effect only affected *my* ability to see the little things until the bigger ones were out of the way because I was the one who wrote the scripts. A separate tester would have a different perspective and shouldn't be affected by this FIFO/Proofreader Effect in the same way.

We do Exploratory Testing almost exclusively on our products. When I test, I don't see the same effect happening to me. It's just a matter of time until I get to a feature or page and then I hit it like a whirlwind and move on. It's quite cool and effective. Defect-finding rate starting to slow down? Switch to another Test Technique - voilà! More bugs. All the Risks addressed? Move on.

I've seen a number of conversations happening on some of the message boards questioning whether or not a programmer is able to test his or her own code. After this recent experience, I think that if the desire is there and there is enough time, then yes, he or she should be able to find all the bugs that matter.

Once again, a separate pair of eyes not constrained by the FIFO effect would likely speed up the process. Nothing we didn't already know. A Tester helps you to find the bugs that matter sooner rather than later. Well, a good one will anyway.

Sometimes "Good Enough" isn't good enough

I've been a big fan of the idea of "Good Enough" software testing over the last decade. Rather than thinking that the problem of doing good Software Testing is akin to "Digital" technology with its complete, precise values, I've thought of it more like "Analog" technology with the big dials and reasonable, approximate (and cheaper) signals.

This past week, I've watched my seven-year-old son play with a new LEGO set that he got for Christmas. It's a neat mechanical LEGO set that lets him build a motorised helicopter, cool car, or attack crab thingy. (ASIDE: I can't begin to imagine what the Marketing team's conversation was like when they thought up that last one!) When he completed the helicopter and turned on the motor, I noticed that it didn't sound right to me. So I went over and took a close look at his creation. It looked correct. There didn't seem to be any missing pieces, but when he turned it on again, I noticed that not all of the gears turned together consistently. I picked it up and took a really good look at it. Not knowing much about how it was built, I just squeezed together any LEGO pieces that weren't tightly packed whenever I came across them.

There was one set of LEGO pieces that had a gap of about a millimetre. When I squeezed them together, it made a (good) snap sound. I asked my son to turn on the motor again, and this time it not only sounded correct, but the gears all worked together in perfect synch as well. Voilà!

I thought about this for a few moments afterwards. Up until then, my son had worked on the premise that if the LEGO pieces were reasonably attached, it was "good enough". He didn't need a tight fit between every single piece to see the finished product. I mean, it looked like the complete picture of the helicopter in the instruction manual, so what difference would a small gap between a few pieces make?

In this case it made a big difference. If it needs to work like clockwork, then "good enough" is probably not enough.

So what's the tie in to Software Testing? Well, just how scalable is the "Good Enough" approach? For me, it's always been about testing to the most important Risks and using whatever tools and techniques seem appropriate to the situation at hand. It's always seemed kind of foolproof to me.

Maybe my Digital/Analog analogy is a flawed one. I mean, Analog technology has its limits and is not very scalable. Digital technology is more precise and can handle more information. Is there a point when a Digital solution gets so large that it requires an Analog approach again? (I think the answer here is 'yes.')

Is there a time when "good enough" needs to be replaced with a more complete, structured or methodical approach to software testing? I can't think of any situations like that right now, but that doesn't mean there aren't any. That is, I can't think of a time when I wouldn't want to say that good software testing has to strike a balance between the economics, quality and time to market of a product or system. Shipping with bugs is okay if you know they aren't critical or life-threatening.

So perhaps "good enough" doesn't always apply when we're dealing with real-world objects like lego creations, automobiles, watches, et cetera. I think that it still holds pretty well to the virtual world of software testing. Until someone can give me a good example or two of when "good enough" wouldn't be good enough for testing software, I think I'll chalk this up to another distinction between testing software and testing hardware.