As an Agile/Testing consultant, trainer and coach there are certain questions I hear often. One of them is: "What tool should we use for test automation?"
My response is usually the same: "What do you want to automate? What kinds of tests or checks?"
Very often I see people and teams make the mistake of trying to find the silver bullet: the one tool to automate everything. Alas, there is no such tool, and I don't believe there ever will be. At least, not in the foreseeable future.
It's about this time that I usually look for a whiteboard or piece of paper so I can sketch a few diagrams to help illustrate the problem. There are two diagrams in particular that I like to use. (Aside: for reference, I first publicly presented this at the DevReach 2012 conference, and it is part of the Agile/Testing classes I teach. This is the first time I am blogging the topic since I am often asked about it.)
Showing posts with label agile.
Agile Testing vs Exploratory Testing
This month I will be doing two sets of training sessions - one for Agile Testing (AT) and one for Exploratory Testing (ET). I have extensive experience with both and I enjoy teaching and coaching both activities. This is the first time that I have been asked to do both training sessions at basically the same time, so I have been giving the relationship between the two some thought.
Over the past decade I have sometimes struggled with the difference between the two. I admit that I have on occasion even described them as being the same thing. Having given it some thought, I now think differently.
The insight came to me during a recent conversation about both topics. The courses I'm doing have different audiences, different outlines, and different expected behaviours/outcomes. Yes, there is some overlap, but not much.
I have written previously about what I believe is ET (it's an interesting post - I suggest you take a moment to read it if you haven't already). Near the end of that article, I mention Agile Software Development and the Agile Testing Quadrants, so there is some relationship between the two.
ET is sometimes described as an exemplar of the Context-Driven Software Testing community. If you read through the seven basic principles of the Context-Driven Testing (CDT) School, you may see similarities between its philosophy and the values and principles listed in the Agile manifesto. Things like:
- CDT: People work together to solve the problem
  - Individuals and interactions over processes and tools
  - ...face-to-face conversation
  - Business people and developers work together daily throughout the project
- CDT: Projects are often unpredictable
  - Respond to change
  - Welcome changing requirements, even late in development...
  - Deliver working software frequently...
- CDT: No best practices
  - Continuous attention to technical excellence and good design...
  - The best architectures, requirements, and designs emerge from self-organizing teams...
So what are some of the differences?
Test Management is Wrong
Test Management is wrong. There. I said it.
I can't believe it took me this long to notice the obvious. If you are doing Software Development in any fashion, and are worried about how to manage your testing to develop a "quality" product, stop it.
Let's be clear about what I mean here. If you consider any software development life cycle (SDLC) process, you will find activities like the following arranged in some fashion or other:
- Requirements gathering, specification
- Software design
- Implementation and Integration
- Testing (or Validation)
- Deployment
- Lather, Rinse, Repeat (i.e. Maintain, Enhance, Fix, Mangle, Spin, and so on)
These activities aren't Waterfall or Agile or anything else; they are just activities. HOW you choose to do them will reflect your SDLC. I don't care about that right now. The part I'm picking on is the Testing bit near the middle, regardless of whether you do it in an agile, Waterfall, or some other way.
In particular, I am picking on the fallacy or myth that a good Test Management plan/process is what you need to develop and release a high Quality product.
Shuhari
When I work with teams to help them learn something new, I try to pay attention to a few things. Firstly, I pay attention to how people are learning, and secondly how I am teaching.
When I used to teach Physics and Chemistry in high school, one validation of 'success' often came from how the students left the classroom. Teenagers generally came into one of those classes the same way (at least at the start of the year): I don't want to be here, this isn't important to me, I'm not going to learn anything useful.
Okay. Gauntlet down. Let's begin.
The Human Side of Living
As I go through life I keep noticing stories, ideas and insights into humanity and I sometimes wonder if we are meant to discover these lessons slowly or if there isn't a quicker way to learn them.
Take high school, for example: we had a really weird Religion teacher who was very Zen or meta or something, and no one got him. I mean, he would use examples like "take an extension cord and plug it into itself and there you go." Huh? None of us got it. And then there were times when he would repeatedly say things like "attack the point not the person," and that was a phrase I understood.
From him I learned that we can sometimes meet real jerks and still learn interesting things from them. Learn to separate what you hear and understand from your feelings about the messenger. It's hard sometimes, but you can get good at this.
Testing is a Medium
In a few days I will be giving a presentation to the local Agile/Lean Peer 2 Peer group here in town. The group has a web site - Waterloo Agile Lean, and the announcement is also on the Communitech events page.
I noticed the posted talk descriptions are shorter than what I wrote. The Waterloo Agile Lean page has this description:
"This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"

The Communitech page has this description:
"Exploratory Testing is the explosive sound check that helps us see things from many directions all at once. It takes skill and practice to do well. The reward is a higher-quality, lower-risk solution that brings teams a richer understanding of the development project.
This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"
Quality Agile Metrics
I was asked recently what metrics I would collect to assess how well an agile team is improving. I paused for a moment to scan through 12 years of research, discussion, memories and experiences with Metrics on various teams, projects and companies - mostly failed experiments. My answer to the question was to state that I presently only acknowledge one Metric as being meaningful: Customer Satisfaction.
We discussed the topic further and I elaborated some more on my experiences. Regarding specific "quality" metrics, I explained that things like counting Test Cases and bug fix rates are meaningless. I also referred to the book "Implementing Lean Software Development" by Mary and Tom Poppendieck (which I highly recommend BTW) which warns against "local optimizations" because they will eventually sabotage optimization of the whole system. In other words, if I put a metric in place to try and optimize the Testing function, it doesn't mean the whole [agile] development team's efficiency will improve.
It needs to be a whole-team approach to quality and value. Specific measurements and metrics often lead to gaming of the system and a focus on improving the metrics rather than on delivering quality and value. If the [whole] team is measured on customer satisfaction, then that is what they will focus on. I have long since stopped measuring individual performance on a team.
I haven't stopped thinking about this question though, so I put this question out on Twitter this morning:
Aside from Customer Satisfaction, are there any other Quality metrics you'd recommend in an #agile environment?
Testers, Learn about Agile (and Lean)
Let me tell you about something called Dramatic Irony. You see it in movies, television shows, plays and in many other places. It happens when you (as the audience or observer) see or understand something that the main characters don't. Often this is funny; sometimes it's not. Personally, I am one of those who likes to laugh when I see this happen.
On my learning/education quest over a decade ago, I took many different positions and roles within various IT organisations so that I could learn different aspects of Quality. I went through various phases, and the one I am least proud of was the "Quality champion." This wasn't a job title so much as a belief that (mis-)guided my actions. The role/part/perspective came mainly from believing what my employer(s) told me at the time - namely that "the QA/Test team was responsible for quality."
If you have worked in Software Development for a while, and perhaps for a larger organisation, you have likely seen someone who believes they are a Quality Champion. They don't want to see any (known) bugs go out; they check up on everyone in the team to see that they have done their reviews or had someone else inspect their work before passing it onto the next person/team; they join committees to create, document, maintain or present processes that will increase the quality of the delivered products/solutions; and so on.
Ah, the poor misguided fools. Bless their hearts.
Quality Center Must Die
It is not a matter of "if" -- it is a matter of "when" HP's Quality Center software will die. And you, my dear readers, will help make that happen.
"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.
But I am getting ahead of myself. Let's rewind the story a bit...
Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.
After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more the kind of consultant that empowers the client with whatever they need to help themselves learn and grow. It's a bit of a slower pace, but the results are more positive and long-lasting.
I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One of the unexpected surprises that cropped up this time was that this development team is not an end-to-end delivery team, so when they are "done" their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be solved here and I am happy to see some really bright, caring people trying to solve these problems. So that's okay.