Fishing for Wisdom

I just came back from a week at the AYE Conference. My head is full of new ideas swimming around and stirring up half-baked old ones - which is a good thing.

One of the thoughts causing my head to spin came from an evening session where we discussed ideas for improving the conference. Johanna Rothman led the session, and at one point she mentioned that the AYE workshop sessions included both pure [Virginia] Satir [ideas, models, etc.] and applied Satir. This got me thinking about some of the subtle differences I had noticed between the sessions and what they meant to me.

In particular, some of the ideas and models I learned in the AYE sessions seem to dwell longer in my mind and apply to a broader spectrum of situations, while others seem more specific - i.e. an application of a model in a particular context. Don't get me wrong: whether you choose to attend a pure Satir or an applied Satir workshop at AYE (the sessions aren't labelled as such because it doesn't really matter), it's a win-win scenario. :) Sure, different hosts have different styles, but each session is different every time, so you sometimes see people attend the same session again to see what new insights they gain.

So, what's the big deal here? Why did I get stuck on a small point like this? Well, it reminded me of the time when I was in Teachers' College in the mid-'90s, preparing to become a high school science teacher.

Using MS Outlook to support SBTM

Okay, to recap: Session-Based Test Management (SBTM) is a test framework to help you manage and measure your Exploratory Testing (ET) effort. Four basic elements make this work:

1. Charter or mission - the purpose that drives the current testing effort
2. Time-boxed periods - the 'sessions'
3. Reviewable result
4. Debrief

There are many different ways that you might implement or apply these elements in your team or testing projects.
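
To make the four elements a little more concrete, here's a minimal sketch of one session captured as a plain data record in Ruby. The class and field names are my own invention for illustration - SBTM itself doesn't prescribe any particular data format.

```ruby
# A minimal sketch of one SBTM session as a data record.
# These names are illustrative only, not an SBTM standard.
Session = Struct.new(:charter, :start_time, :duration_minutes,
                     :notes, :bugs, :debriefed) do
  # The 'reviewable result' is what the tester leaves behind for
  # the debrief: the charter, the session notes, and any bugs.
  def reviewable_result
    { charter: charter, notes: notes, bugs: bugs }
  end
end

session = Session.new(
  "Explore the login page for input-handling problems", # (1) charter/mission
  Time.now,                                             # (2) time-box start...
  90,                                                   #     ...and length
  [],                                                   # notes gathered as we test
  [],                                                   # bugs found along the way
  false                                                 # (4) set true after debrief
)
```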

Let's take a look at tracking the testing effort from a strictly project-management perspective. Years ago, when I first became a test manager, I was introduced to the idea of the 60% 'productive' work day as a factor to consider when estimating effort for project schedules. That is, in a typical 8-hour workday you don't really get 8 complete, full hours of work from someone - I don't believe it's mentally possible. The brain needs a break, as does the body, and there are many natural distractions in the workplace: meetings, email, breaks, support calls, the stability of the code or environments, and so on. The reality is that the number of productive working hours for each employee is something less than the total number of hours they're physically present in the workplace.

That 'productivity' factor changes with each person, their role and responsibilities, the number of projects in the queue, and so on. Applying some statistical averaging to my past experiences, I find that 60% seems about right for a tester dedicated to a single project. I have worked with some teams that have been more productive and some much less.

So what does this look like? If we consider an 8-hour day, 60% is 4.8 hours. I'm going to toss in an extra 15-minute break or distraction and say that it works out to about 4.5 hours of productive work from a focussed employee in a typical 8-hour day. Again, it depends on the person and the tasks they're performing, so this is just an averaging factor.
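
To make the arithmetic concrete, here's the same estimate as a few lines of Ruby. The 90-minute session length is my own assumption for the example - a common choice in SBTM write-ups, not a fixed rule.

```ruby
# Back-of-the-envelope capacity math using the 60% factor above.
hours_in_day     = 8.0
productivity     = 0.60
extra_break      = 0.25                  # the extra 15 minutes
productive_hours = hours_in_day * productivity - extra_break
puts productive_hours                    # => 4.55, call it ~4.5

# With 90-minute sessions, that's roughly three full test
# sessions per tester per day.
sessions_per_day = (productive_hours * 60 / 90).floor
puts sessions_per_day                    # => 3
```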

Test-Driven Development isn't new

I used TDD as an analogy with a tester today to explain how logging bugs in a bug tracking system drives the development. A bug report represents a failing test (once you verify that it really is a bug, that is) against some stakeholder need or want.

In Test-Driven Development, the programmer first writes/automates a test that represents the user story the customer/user wants. The test fails. The programmer then writes just enough code to pass the test and moves on, refactoring code along the way.
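
As a minimal sketch of that red-green cycle in Ruby (using Minitest; the `Cart` class and its behaviour are invented purely for illustration):

```ruby
require "minitest/autorun"

# Written first, this test fails ('red') until the Cart class
# below exists and behaves as the story describes.
class CartTest < Minitest::Test
  def test_total_sums_item_prices
    cart = Cart.new
    cart.add(3.50)
    cart.add(1.25)
    assert_in_delta 4.75, cart.total
  end
end

# Then just enough code to make the test pass ('green'); the
# test now acts as a safety net for any later refactoring.
class Cart
  def initialize
    @items = []
  end

  def add(price)
    @items << price
  end

  def total
    @items.sum
  end
end
```

Run the file and Minitest reports one passing test; break `Cart#total` and the same test turns red again - which is the whole point.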

It's much the same with regular system testing (i.e. in the absence of agile/TDD practices), where a tester identifies and logs a bug in the bug tracking system. One difference is that these bug reports/tests aren't always automated. (Okay, I've never seen anyone automate these bug reports/tests before, but I like to believe that some companies/dev teams out there actually do this.) That doesn't change the fact that a bug report is the failing test. Even if it's a manual test, it drives the development change, and then the bug report is checked/retested to see that the fix works as expected.

Bug regression testing, then, is a requirement for good testing and system/software development, not an option.

So, while the agile practices of TDD and others may seem new, I see this one as a retelling of a common tester-programmer practice. If anything, I see TDD as an opportunity to tighten/shorten/quicken the loop between testing feedback and development. With practice, TDD helps programmers develop the skills and habits they need to create code and systems with confidence -- to know that as the system grows, the specific needs of the customers are being met every step along the way. No one gets left behind.

How can we, as testers, help? If your programmers don't practice TDD or automate tests, start investigating ways that you can do this. Investigate Open Source scripting languages. Engage your programmers in discussions of testability of the interfaces. There are many articles and presentations on the internet on the topics of test/check automation, frameworks and Domain Specific Languages (DSL).

Start reading. Participate in discussions (in real life and online). Start developing scripting skills (I recommend Ruby, of course, especially to the tester newbie). If you don't feel confident with your programming skills, help hire someone onto your test team who can help all the testers advance their skills, knowledge, and productivity in that area.

Be the Quality Advocate by putting your words into practice. You want your programmers to start practicing TDD? Show them how you can do it. You're already doing it - scripting/automating the checks that demonstrate a bug is just the next step.

Start by automating a single bug failure. Take it from there.
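
As a hedged sketch of what that first automated bug check might look like in Ruby with Minitest - the bug number, the `strip_tags` helper, and its behaviour are all made up for the example:

```ruby
require "minitest/autorun"

# Hypothetical code under test -- a stand-in for whatever the
# bug report was filed against.
def strip_tags(text)
  text.gsub(/<[^>]*>/, "")
end

# One automated check per bug report. While the bug is open,
# this test fails just as the report describes; once the fix
# lands it passes, and it stays in the suite afterwards as the
# bug regression test argued for above.
class Bug1234Test < Minitest::Test
  def test_html_tags_are_stripped_from_comments
    assert_equal "hello", strip_tags("<b>hello</b>")
  end
end
```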

Why New Year's Resolutions Fail

Someone recently said something to me that made me think. He said that all New Year's resolutions fail because they come at the wrong time.

You know what I mean by New Year's resolutions, right? It's those promises you make to yourself, and maybe to others, right around the end of December that you will change or improve yourself in some way in the new year.

The sentiment may not be wrong, but the timing certainly is. The argument made was that January 1st isn't really the start of the new year - September is. You see, here in North America, whether you are in school or not, most of life revolves around a "school year" structure of September to June, with July and August being the summer holiday months.

So, if September is the start of the year, we can't make promises to change something in January. That's like starting a 2-week sprint (in Agile Development) and saying halfway through that you are going to have completely new objectives. It doesn't work that way. You already committed to delivering certain goals during the Sprint Planning session at the start.

What's that? What if you didn't set any goals at the beginning of the Sprint/Year in September? Doesn't matter. The Sprint/year started anyway, and you are in the middle of it. There's no way you are easily going to shift your life in a totally new direction halfway through.

So, the moral of the story is: if you want to make New Year's resolutions, make them in August, not in December. That way you are more likely to follow through with them as the year progresses.

Hm, interesting.

Of course, life changing events can happen any time. You don't need to make a resolution of any kind to change yourself and how you get along in the world. You just need to see yourself how you want to be, and live like you've already reached that goal.

Testing Software is like Making Love

Work with me for a minute here.

One of the things I dislike about a recent trend in the software testing profession is the lack of analogies and examples that relate to me as an adult. Yes, it's interesting that children are innocent, curious, and natural explorers, but they are also naive and inexperienced, and they cannot reason or abstract the way adults can. I don't find it helpful, for myself or when training new testers, to say you need to be more like children. I'm an adult, so how can you help me now? Do you have an analogy that relates to me as an adult?

Here's an analogy that I think might help.

Testing software is like making love.

What does that mean? For a start, what's the difference between 'making love' and 'having sex'? I think the big difference is caring about the person you are with and wanting to satisfy their needs.

Is there just one way to do it? No. Every person's needs are different, just like every project we help test is unique.