Fishing for Wisdom

I just came back from a week at the AYE Conference. My head is full of new ideas swimming around and stirring up half-baked old ones - which is a good thing.

One of the thoughts causing my head to spin came from a session one evening where we discussed ideas to improve the conference moving forward. Johanna Rothman led the session and at one point she mentioned that the AYE workshop sessions included pure [Virginia] Satir [ideas, models, etc.] and applied Satir. This got me thinking about some of the subtle differences I had noticed about the sessions and what they meant to me.

In particular, some of the ideas and models I learned from the AYE sessions appear to dwell longer in my mind and apply to a broader spectrum of situations while others seem to be more specific - i.e. an application of a model in a particular context. Don't get me wrong, whether you choose to attend a pure Satir or applied Satir workshop at AYE (and the sessions aren't labelled as such because it doesn't really matter), it's a win-win scenario. :) Sure, different hosts have different styles, but each session is different every time so you sometimes see people attend the same session again to see what new insights they get.

So, what's the big deal here? Why did I get stuck on a small point like this? Well, it reminded me of the time when I was in Teachers' College in the mid-'90s, preparing to become a high school science teacher.

Using MS Outlook to support SBTM

Okay, to recap, Session-Based Test Management (SBTM) is a test framework to help you manage and measure your Exploratory Testing (ET) effort. There are 4 basic elements that make this work: (1) Charter or mission (the purpose that drives the current testing effort), (2) Time-boxed periods (the 'sessions'), (3) Reviewable result, and (4) Debrief. There are many different ways that you might implement or apply these elements in your team or testing projects.
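As a rough sketch of how those four elements hang together (the class and field names here are my own illustration, not any official SBTM template), a session record might look like this in Ruby:

```ruby
# A minimal sketch of an SBTM session record. Field names are
# illustrative only - adapt them to your own team's template.
Session = Struct.new(:charter, :start_time, :duration_minutes,
                     :notes, :bugs, :issues, :debriefed) do
  # The "reviewable result" is simply everything the tester wrote down
  # during the time-boxed session, gathered for the debrief.
  def reviewable_result
    { charter: charter, notes: notes, bugs: bugs, issues: issues }
  end
end

session = Session.new(
  "Explore the login page for input-handling problems",  # (1) charter
  Time.now, 90,                                          # (2) time-box
  ["Tried SQL metacharacters in the username field"],    # (3) result...
  [], [],
  false                                                  # (4) debrief pending
)
```

The point of the sketch is only that each of the four elements maps to something concrete you can record and review.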

Let's take a look at tracking the testing effort from strictly a Project Management perspective. Years ago, when I first became a test manager, I was introduced to the idea of the 60% 'productive' work day as a factor to consider when estimating effort applied to project schedules. That is, in a typical 8-hour workday you don't really get 8 complete, full hours of work from someone. I don't believe it's mentally possible to get that. The brain needs a break, as does the body, and there are many natural distractions in the workplace (meetings, email, breaks, support calls, stability of the code or environments, and so on), so the reality is that the number of productive working hours for each employee is actually something less than the total number of hours they're physically present in the workplace.

That 'productivity' factor changes with each person, their role and responsibilities, the number of projects in the queue, and so on. Applying some statistical averaging to my past experiences, I find that 60% seems about right for a tester dedicated to a single project. I have worked with some teams that have been more productive and some much less.

So what does this look like? If we consider an 8-hour day, 60% is 4.8 hours. I'm going to toss in an extra 15 minute break or distraction and say that it works out to about 4.5 hours of productive work from a focussed employee in a typical 8-hour day. Again, it depends on the person and the tasks that they're performing, so this is just an averaging factor.
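Spelling out that arithmetic (the 60% figure and the extra 15-minute break are the averages suggested above, not universal constants, and the 90-minute session length is just a common SBTM convention):

```ruby
# Rough productive-hours estimate from the averages discussed above.
workday_hours     = 8.0
productivity      = 0.60   # the 60% 'productive' work day factor
extra_break_hours = 0.25   # one extra 15-minute break/distraction

productive_hours = workday_hours * productivity - extra_break_hours
puts productive_hours      # => 4.55, i.e. roughly 4.5 hours

# At roughly 90 minutes per test session, that works out to
# about 3 full sessions per tester per day.
sessions_per_day = (productive_hours * 60 / 90).floor
```

That "about 3 sessions a day" figure is the kind of planning number this factor is good for - nothing more precise than that.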

Test-Driven Development isn't new

I used TDD as an analogy with a tester today to explain how logging bugs in a bug tracking system drives development. A bug report represents a failing test (once you verify that it really is a bug, that is) according to some stakeholder need/want.

In Test-Driven Development, the programmer first writes/automates a test that represents the user story the customer/user wants. The test fails. The programmer then writes just enough code to pass the test and moves on (refactoring along the way, and so on).

It's much the same with regular system testing (i.e. in the absence of agile/TDD practices), where a tester identifies and logs a bug in the bug tracking system. One difference is that these bug reports/tests aren't always automated. (Okay, I've never seen anyone automate these bug reports/tests before, but I like to believe that some companies/dev teams out there actually do this.) That doesn't change the fact that a bug report is the failing test. Even if it's a manual test, it drives the development change, and then the bug report is checked/retested to see that the fix works as expected.

Bug regression testing, then, is a requirement for good testing and system/software development, not an option.

So, while the agile practices of TDD and others may seem new, I see this one as a retelling of a common tester-programmer practice. If anything, I see TDD as an opportunity to tighten/shorten/quicken the loop between testing feedback and development. With practice, TDD helps programmers develop the skills and habits they need to create code and systems with confidence -- to know that as the system grows, the specific needs of the customers are being met every step along the way. No one gets left behind.

How can we, as testers, help? If your programmers don't practice TDD or automate tests, start investigating ways that you can do this. Investigate Open Source scripting languages. Engage your programmers in discussions of testability of the interfaces. There are many articles and presentations on the internet on the topics of test/check automation, frameworks and Domain Specific Languages (DSL).

Start reading. Participate in discussions (in real life and online). Start developing scripting skills (I recommend Ruby, of course, especially to the tester newbie). If you don't feel confident with your programming skills, help hire someone onto your test team that can help all the testers advance their skills, knowledge, and productivity in that area.

Be the Quality Advocate by putting your words into practice. You want your programmers to start practicing TDD? Show them how you can do it. You are already doing it - scripting/automating the checks that demonstrate a bug failure is just the next step.

Start by automating a single bug failure. Take it from there.
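As a sketch of what that first step might look like (the code under test and the bug itself are invented for illustration), a logged bug can be pinned down as a single automated check using Minitest, which ships with Ruby:

```ruby
require "minitest/autorun"

# Hypothetical code under test. The (invented) bug report says:
# "Order totals ignore the tax rate."
def order_total(subtotal, tax_rate)
  subtotal * (1 + tax_rate)
end

class BugReport1234Test < Minitest::Test
  # This check *is* the bug report expressed as a failing test:
  # it fails until the fix goes in, then lives on as a
  # permanent regression test.
  def test_total_includes_tax
    assert_in_delta 107.0, order_total(100.0, 0.07)
  end
end
```

One bug, one check. Once it passes, bug regression testing for that report is free from then on.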

Why New Year's Resolutions Fail

Someone recently said something to me that made me think. He said that all New Year's resolutions fail because they come at the wrong time.

You know what I mean by New Year's resolutions, right? It's those promises you make to yourself, and maybe to others, right around the end of December that you will change or improve yourself in some way in the new year.

The sentiment may not be wrong, but the timing certainly is. The argument made was that January 1st isn't really the start of the new year - September is. You see, here in North America, whether you are in school or not, most businesses revolve around a "school year" structure of September to June, with July and August being the summer holiday months.

So, if September is the start of the year, we can't make promises to change something in January. That's like starting a 2-week sprint (in Agile Development) and saying half-way through that you are going to have completely new objectives. It doesn't work that way. You already committed to delivering certain goals during the Sprint Planning session at the start.

What's that? What if you didn't set any goals at the beginning of the Sprint/Year in September? Doesn't matter. The Sprint/Year started anyway and you are in the middle of it. There's no way you are easily going to shift your life in a totally new direction halfway through.

So, the moral of the story is: if you want to make New Year's resolutions, make them in August, not in December. That way you are more likely to follow through with them as the year progresses.

Hm, interesting.

Of course, life changing events can happen any time. You don't need to make a resolution of any kind to change yourself and how you get along in the world. You just need to see yourself how you want to be, and live like you've already reached that goal.

Testing Software is like Making Love

Work with me for a minute here.

One of the things I dislike about a recent trend in the software testing profession is the lack of analogies and examples that relate to me as an adult. Yes, it's interesting that children are innocent, curious and natural explorers, but they are also naive, inexperienced and cannot reason or abstract the way adults can. I don't find it helpful - for myself or when training new testers - to be told to be more like children. I'm an adult, so how can you help me now? Do you have an analogy that relates to me as an adult?

Here's an analogy that I think might help.

Testing software is like making love.

What does that mean? For a start, what's the difference between 'making love' and 'having sex'? I think the big difference is caring about the person you are with and wanting to satisfy their needs.

Is there just one way to do it? No. Every person's needs are different, just like every project we help test is unique.

Happy Limerick Day! (May 12th)

The CEO where I currently work sent around the following note by email at the start of today:
Today is Limerick day. A limerick is a five-line poem with a strict form (AABBA rhyming), which intends to be witty or humorous, and is sometimes obscene with humorous intent.

Here is one from me (if you respond in kind let's stay away from the "obscene" part of the definition)!!

Of course, that challenge was answered and we have had a steady stream of limericks all day!  :)

Here were some of the ones I thought up between meetings and test sessions...

SBTM ET.XLS spreadsheet with DMY format

Ha ha!  I finally got some time to figure out how to get the ET.XLS spreadsheet to support DMY format instead of the default MDY (US) format.  It turned out to be a small change to the macros, but unfortunately required hard-coding the number of columns in the input files to make it work.  As long as you aren't changing the number of columns in the TXT files, this should work for you.  I also had to remove the forced "m/d" format on one of the worksheet tabs.
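For anyone post-processing the same session TXT files outside Excel, the DMY-vs-MDY ambiguity can also be handled explicitly in Ruby (the date strings below are invented examples, not taken from the spreadsheet):

```ruby
require "date"

# Parse a date string explicitly as day/month/year rather than
# letting a locale guess - the usual source of DMY-vs-MDY bugs.
def parse_dmy(text)
  Date.strptime(text, "%d/%m/%Y")
end

d = parse_dmy("05/04/2010")
# With the format stated explicitly, this is 5 April, not May 4.
puts d.month   # => 4
```

The general point: never let a tool infer the date format when the format is known - state it.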

You can download a copy of the zipped spreadsheet here:  I also updated my www.staqs/sbtm page to include this file.

Last July I posted the latest tools-ruby scripts and received some feedback that I wasn't the only one with this problem.  I fixed the date formatting problem using Excel 2003, so if anyone is using an older version of Excel, too bad. ;)

As for Excel 2007, I just started using it this week.  I noticed that I had to play with the Macro security settings (once you make the 'Developer' toolbar visible), and then I got it to work.  I'll be playing with this a bit more in the coming days so if an update is required I'll post it here to let you know.

If you find this updated file helpful, please drop me a line to let me know. Thanks.



I attended the first independent TED event in Waterloo (TEDxWaterloo) yesterday, 25 February 2010. The theme was "Tomorrow StarTED Yesterday." The web site and twitter account page have lots of great info if you're interested to know more about the event and speakers. There's even a nice photo blog of the event at

So what can I add that hasn't already been said? Well, I can tell you what the event meant to me.

Now with minty-fresh visitor counter

Someone suggested to me this past weekend that I add a visitor counter to this blog. It's one of the most common suggestions made to me over the years and I don't know why.

Back in the mid-'90s I had a web site with a counter. It was novel then. I played around with different fonts and features and watched it go up over time. I don't have that site anymore. I set up a new web site about 7 years ago, but adding a counter wasn't one of the important things on my to-do list. Foolish? Dunno. Maybe. Maybe not.

Is Quality measured in numbers?

Anyhoo, I won't go there today. ;-) I'm not going to think philosophically about it right now. I'll reserve thinking and judgment about the visitor counter for a later date.

This is just a placeholder note to indicate that the counter started today.

Cheers! Paul.

SBTM is not ET

There's a subtle but important distinction that I'd like to talk about. Session-Based Testing is *not* Exploratory Testing. Please stop using those terms interchangeably, because they're not the same thing.

Exploratory Testing (ET) is a testing approach that puts the emphasis on real-time learning, test design and test execution, as opposed to a more "scripted" approach that puts the emphasis on the separation of these activities - separated in time, space, and usually with copious amounts of documented artifacts.

When I first started in I.T. over 20 years ago, any testing I did as part of my programming contracts was exploratory in nature. I didn't call it 'ET' at the time and I certainly didn't approach it with the same discipline and formality that I do today. Back then, programming was my main focus and testing was just something I did as required along the way. Ten years later (or about 12 years ago, depending on your perspective), I took a workshop class on "Test Case Design" with Ross Collard. That was an amazing class that opened my eyes to a whole new world of analysis and problem solving that I didn't know before. Cool!

After that workshop, I had plenty of opportunities to practice what I learned, try new techniques and tools, and explore additional testing ideas thrown out onto the just-budding software testing mailing lists. One of the things we discussed in Ross' class was the role of "ad hoc" or informal testing. I don't have access to the data, but some study-or-other at the time (90's sometime?) showed that ad hoc testing failed to produce the same amount of testing coverage that formal test design analysis would.

Okay, I buy that. To paraphrase: guessing ideas off the top of your head consistently produced less coverage than having some structured analytical approaches/techniques/heuristics/models at your disposal. Okay. I don't need a formal study to tell me that.

Time - Bane or Innovation Catalyst?

Time. What time is it? How much time do we have? When do you want/need it? What's the deadline? I need more time!

If we had all the time in the world for software development, would the delivered results really be of better quality?

A co-worker at a past employer wrote the following when someone sent an email submission for a fun, internal contest the day after the deadline:
The contest ended a long time ago. Trying to submit something now is like submitting your late university assignment.
One of my profs told me:
"I don't care if you have something that's better than all the works of Shakespeare. If you can't get it in before the deadline it's worth nothing to me."

Ha, ha. It was intended as a funny remark at the time but there's some truth in there too.

So, if someone submits an assignment "on time" but of lesser value/quality than they might produce if they had more time, would they still continue to work on their opus or would they give it up to move onto the next project? Do we (as a collective group of intelligent human beings) lose out by putting Time ahead of Quality?

What I learned about Testing from a crazy ex-girlfriend

I was reminiscing with a tester colleague today about how our mothers used to mess with our stuff when we were younger and how it really got on our nerves.

Picture the scene: you have a desk in your room that's plastered with papers and stuff everywhere. And you know precisely where everything is. It's your mess after all.

Enter the mom. She looks around, maybe she's come in to drop off some laundry or to complain about the state of your room or whatever. You aren't around. She starts to tidy. She tidies the papers on your desk and arranges your action figures/books/pencils/Lego/rubber band collection/whatever into a neat arrangement of some kind.

You return. "Ahhhh! Where's my stuff?!?! You changed the order! I can't find anything now! Don't touch my stuff!!"

Your mom, now hurt because she was "only trying to help," vows to never touch your stuff again unless someone's life depends on it. Maybe. We'll see next week.

What skill does Exploratory Testing require?

I've just been challenged with a sobering reality.

I've heard the term "Exploratory Testing" used many times over the last few years by developers and testers at various gatherings. I've practiced it myself for over 6 years in various black-box system testing efforts. When training new testers on my team, I provided them with foundational concepts in context, risk, scientific method, test techniques and communication. Then over the course of several weeks, I reviewed their test sessions and provided feedback during debrief sessions to improve their understanding and application of the various testing skills required to be efficient and effective.

People have told me that I have really high standards, and perhaps I do. To me, testing is a passion and fun, and quality is an ideal achieved through effective communication and interactions with all the stakeholders on a project.

But that's all beside the point. If the question is "what is Exploratory Testing and how do you do it?" then my standards and expectations of team members are irrelevant.

ET is simply an approach to testing software where the tests are not predefined and the focus is on iterative learning, test design and execution (to paraphrase a simplified definition).

How someone learns, how someone designs tests, how someone executes those tests - these things are not defined by any standard; they are applied differently by different people. ET can be performed by anyone. There aren't any requirements for how well or thoroughly someone should perform it.

To quote from the animated movie "Ratatouille": "Anyone can cook. But I realize, only now do I truly understand what he meant. Not everyone can become a great artist, but a great artist can come from anywhere."

So, when I hear the term ET thrown around, I have about as much understanding of how they're testing as I do from a development shop that uses the term "Agile". That is, I don't know anything about what it means to them, how they're applying it, how effective it is, or how it compares to my standards/expectations.

I've been reading articles and research lately comparing ET and Test Case-driven Testing (TCT) approaches, and it never ceases to amaze me how stats and research may be twisted to support everyone's beliefs about which is better than the other.

Developers and Product Managers who have worked with me understand the quality of the information and feedback that my testing style provides. They have said that it is a whole new level of testing feedback they've never seen before. It makes me feel good to hear that - that I'm providing a valuable service.

But when I read one of these comparison articles, I have to assume that the ET applied in the research studies isn't at the same level at which I apply it. I have to accept that. I may not like it, but that's the reality. To me, the same research would likely show that Agile and Waterfall aren't really all that different in terms of produced output. Sigh.

Am I missing something?