Lessons Learned by a Software Tester<br />
<br />
<b>Agile Testing Automation</b> (2015-04-28)<br />
<br />
As an Agile/Testing consultant, trainer and coach, there are certain questions I hear often. One of them is: "What tool should we use for test automation?"<br />
<br />
My response is usually the same: "What do you want to automate? What kinds of tests or checks?"<br />
<br />
Very often I see people and teams make the mistake of trying to find the silver bullet... the one tool to automate <i>everything</i>. Alas, there is no such tool, and I don't believe there ever will be. At least, not in the foreseeable future.<br />
<br />
It's about this time that I usually look for a whiteboard or piece of paper so I can sketch a few diagrams to help illustrate the problem. There are two diagrams in particular that I like to use. (<i>Aside: for reference, I first publicly presented this at the <a href="http://staqs.com/pubs/DevReach_Quality_Foundations_PC2012.pdf" target="_blank">DevReach 2012 conference</a>, and it is part of the Agile/Testing classes I teach. This is the first time I am blogging the topic since I am often asked about it.</i>)<br />
<br />
The first diagram I draw is Brian Marick's <a href="http://www.exampler.com/old-blog/2003/08/21/#agile-testing-project-1" target="_blank">Agile Testing Matrix</a>. In his blog post, he drew a 2x2 matrix as a way to categorise the different kinds of testing you might do on an agile development team. With these categories, you can see the relationships between the type of work certain team members usually do and the kind of value they provide to the project.<br />
<br />
This model was popularised by Lisa Crispin and Janet Gregory in their 2009 book "Agile Testing" and it took on the name "Agile Testing Quadrants." Here is a version of their quadrants for reference:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5kYt0d62hxQJFVtZiTeRGIU_phH2vxStUjvBpZw8ocaWPmNGVNAlUEQbiXc-gptK2LeuYb7M0fGF1JmWiKJ3Lj8tg6me1LOhhKiuJ_Jd40Q5WStDoY3piOYP-_LIKeDAuMMuH/s1600/Agile+Test+Quadrants_2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5kYt0d62hxQJFVtZiTeRGIU_phH2vxStUjvBpZw8ocaWPmNGVNAlUEQbiXc-gptK2LeuYb7M0fGF1JmWiKJ3Lj8tg6me1LOhhKiuJ_Jd40Q5WStDoY3piOYP-_LIKeDAuMMuH/s1600/Agile+Test+Quadrants_2.png" /></a></div>
<br />
This is a fascinating way to categorise the different kinds of testing, and I use it to help people in different roles relate to the feedback activities performed to deliver <b><i>working</i></b> software. When I look at this matrix, I often highlight certain features. One of them is the vertical line separating the right and left halves:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX-6PujR2WDAtwjf5tXNfTwddCKZjG-K2ikIG9rnzZciwarhXQzKeyIiSEJhxByzV3_BvmcD90hB4noS-LZrfoVkzwNkLh00J1dENg27EUKeI5A3BwtVZg1zXkEoj8oIq2F3Qf/s1600/Agile+Test+Quadrants+-+V&V.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX-6PujR2WDAtwjf5tXNfTwddCKZjG-K2ikIG9rnzZciwarhXQzKeyIiSEJhxByzV3_BvmcD90hB4noS-LZrfoVkzwNkLh00J1dENg27EUKeI5A3BwtVZg1zXkEoj8oIq2F3Qf/s1600/Agile+Test+Quadrants+-+V&V.png" /></a></div>
The testing activities on the Left side are what we sometimes call <i>Verification</i> tests, or the kinds of tests we perform to check that we "built it right." We do these activities to ensure high levels of <i>internal</i> quality of the software. If we are going to look to automate any kinds of testing, it's most likely to be from this side of the box. (For those of you in Testing circles, we might call this side "checking.")<br />
<br />
The <i>Validation</i> tests on the Right side help us check that we "built the right thing." These cover <i>externally</i>-visible aspects of quality. A lot of the feedback on this side has two very noticeable traits:<br />
<ol>
<li>We generally can't automate these kinds of tests.</li>
<ul>
<li>We need people to drive them and surprise us with their feedback.</li>
<li>We might leverage computers, tools and automation to help us do pieces, but the design, assessment and decision-making definitely require human effort and experience or skill.</li>
</ul>
<li>Feedback here very often leads to design changes in some way, shape or form.</li>
</ol>
There is an important distinction between the two V's, and any good release requires that we do testing from both sides to ensure internal and external quality. (<i>Aside: I'm not going to do a quadrant-by-quadrant breakdown at this time.</i>)<br />
<br />
The next diagram I like to reference on the topic of test automation is Mike Cohn's Test Automation Pyramid:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGIK7KJIjYvAT5gY6dvkcMqAVydDHnJAsRxz5ctsZwfkLEHbKSjcSexPPBMBd4eOnIj-7n9pmGxPMuPFazjzrqPbVl8pv4AZ75WCGHS9JTaO1MGPeCyU-Ae9Re_ul44qYEX8nr/s1600/Testpyramid_colour.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjGIK7KJIjYvAT5gY6dvkcMqAVydDHnJAsRxz5ctsZwfkLEHbKSjcSexPPBMBd4eOnIj-7n9pmGxPMuPFazjzrqPbVl8pv4AZ75WCGHS9JTaO1MGPeCyU-Ae9Re_ul44qYEX8nr/s1600/Testpyramid_colour.png" height="224" width="320" /></a></div>
<br />
This is a simplification of the problem of test automation to help us understand the relationships between the different kinds of automated tests or checks. In general, we should have many more unit or code-level tests than high-level end-to-end tests that run through the UI. We expect the low-level tests to run quickly and often, so they are naturally our first line of defense to help us build maintainable <i>working</i> software.<br />
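<br />
To make the bottom layer concrete, here is a minimal sketch of a unit-level check in Ruby, using the Minitest framework. (<i>The PriceCalculator class is an invented stand-in for any small unit of logic, not from a real project.</i>)<br />
<pre>
require 'minitest/autorun'

# Hypothetical class under test -- a stand-in for any small unit of logic.
class PriceCalculator
  def total(subtotal, tax_rate)
    (subtotal * (1 + tax_rate)).round(2)
  end
end

# Fast, low-level checks: the wide base of the pyramid.
class PriceCalculatorTest < Minitest::Test
  def test_total_applies_tax
    assert_equal 113.0, PriceCalculator.new.total(100.0, 0.13)
  end

  def test_total_with_zero_tax
    assert_equal 100.0, PriceCalculator.new.total(100.0, 0.0)
  end
end
</pre>
Checks like these run in milliseconds, so a team can afford to run hundreds of them on every build.<br />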
<br />
The top of the pyramid represents the more complex and slower-to-run tests that Testers generally create and maintain. These automated checks are important to help us know that the system appears to be running as expected; however, they are also harder to troubleshoot when something goes wrong.<br />
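<br />
For contrast, here is a sketch of a top-of-pyramid check driven through the browser. (<i>This assumes the selenium-webdriver Ruby gem; the URL and element ids are placeholders, not a real application.</i>)<br />
<pre>
require 'selenium-webdriver'

# End-to-end check: log in through the UI and verify the landing page.
# Slower and more fragile than a unit check -- a change to the page
# structure, the network or the browser can all break it.
driver = Selenium::WebDriver.for :firefox
begin
  driver.navigate.to 'https://example.com/login'           # placeholder URL
  driver.find_element(id: 'username').send_keys('tester')  # placeholder ids
  driver.find_element(id: 'password').send_keys('secret')
  driver.find_element(id: 'submit').click
  wait = Selenium::WebDriver::Wait.new(timeout: 10)
  wait.until { driver.title.include?('Dashboard') }
  puts 'UI check passed'
ensure
  driver.quit
end
</pre>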
<br />
So how do these two models fit together? Like this:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO35a_GEKhUCXa93669cs6TKDBk6ghgYw4XmtOhYb9VsHFJ7lCES82-_zTY479Sm59c5GMf2E73FRSY6856v-REd1I2tuN_sZP_L3ydlMH-mUyxupzldoVVJjvkkqyp24r-Bp_/s1600/Agile+Test+Quadrants_2_with_Pyramid.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO35a_GEKhUCXa93669cs6TKDBk6ghgYw4XmtOhYb9VsHFJ7lCES82-_zTY479Sm59c5GMf2E73FRSY6856v-REd1I2tuN_sZP_L3ydlMH-mUyxupzldoVVJjvkkqyp24r-Bp_/s1600/Agile+Test+Quadrants_2_with_Pyramid.png" /></a></div>
<br />
We can see that just about everything in Q1 is covered by the test automation pyramid. Devs, if you aren't automating your unit and service-level tests, you are doing it wrong. If you want to talk about tools for test automation, this is likely the best place to start. Automation here has the biggest ROI to help you on your agile transition.<br />
<br />
The next important observation is in Q2, the place where Testers typically self-identify. The key here is that NOT EVERYTHING CAN OR SHOULD BE AUTOMATED HERE! The closer you get to business-facing tests, the more you require human intelligence to interpret results in real time. Sorry people, the human brain outperforms machines most of the time with these kinds of problems, so this is where good Exploratory Testing practices can really help you learn how to test things efficiently and effectively. Gotta learn to use your brain to do the best work here and make good choices on what kinds of checks are worth automating.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVQZG4UQcYp0TaHjbyAJ7muXC7JG5lBbMVJeBIlNYwFDeD-eU0FFOIhfGNiM53ShyoKg5Ujmyn3KDt2qBxgv7DgRmwJ5E-_58apw5E6ypujwPo2GG7jkrzeyZmKyDuDz_td90X/s1600/Agile+Test+Quadrants_2_with_Pyramid_and_comments.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVQZG4UQcYp0TaHjbyAJ7muXC7JG5lBbMVJeBIlNYwFDeD-eU0FFOIhfGNiM53ShyoKg5Ujmyn3KDt2qBxgv7DgRmwJ5E-_58apw5E6ypujwPo2GG7jkrzeyZmKyDuDz_td90X/s1600/Agile+Test+Quadrants_2_with_Pyramid_and_comments.png" /></a></div>
<br />
The tests in Q4 are interesting. For the most part, specialists or experts may leverage tools and automation (as required or applicable) to help them test more, but very few tests become regular automated checks. For example, a Performance Testing specialist may leverage automated scripts to simulate certain usage patterns so they can monitor loads on various system resources (e.g. memory, CPU, network, and so on). Just because we leverage automation doesn't mean these tests are automated. However, there are a small number of checks that we might automate for benchmarking purposes to help us detect potential performance threats from one build to the next.<br />
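<br />
As a small illustration of that last point, a build-over-build benchmark check might look something like this sketch. (<i>The endpoint and the two-second budget are invented for the example; only Ruby's standard library is used.</i>)<br />
<pre>
require 'benchmark'
require 'net/http'

# Time one key request and compare it against an agreed budget, so a
# slowdown between builds gets flagged early.
THRESHOLD_SECONDS = 2.0                      # assumed budget -- set your own
uri = URI('https://example.com/api/search')  # placeholder endpoint

elapsed = Benchmark.realtime do
  Net::HTTP.get_response(uri)
end

puts format('search responded in %.2fs', elapsed)
abort 'FAIL: possible performance regression' if elapsed > THRESHOLD_SECONDS
</pre>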
<br />
One important point to note is that the automation tools differ widely depending on purpose and technologies, so they change from one quadrant to the next and from one layer to the next. Occasionally, we might be lucky enough to leverage the same automation tool for a few different test design approaches but <i>caveat emptor</i>!<br />
<br />
Don't let the snake-oil salesmen fool you into buying tools that they say can test everything. Similarly, you have my permission to ignore those ignorant managers who decree that you must use the same automation tool for every kind of testing you need to do. Reality doesn't work that way, so don't waste your time and the company's money on fool's errands.<br />
<br />
<i><b>Know</b></i> what you need the automation tools to do (ask if you are unsure), and consult with the skilled practitioners in those fields of development (yes, testers, this includes you) to help you assess fitness for use.<br />
<br />
So, what tool should we use for test automation? The right one.<br />
<br />
Let's do some research into the problem we're trying to solve and learn from others who have done work in this field. Allow yourself time to learn and experiment with tools to check their suitability. Tools are rarely perfect, so expect to spend some time customising them to your specific needs. Demo what you learn to your team and get the whole team working together to fit the tools into the development cycle.<br />
<br />
We succeed as a team, so never go it alone in an automation effort. Remember that test automation is a development task -- i.e. programming is involved and all the supporting activities that go with it (like maintenance, version control, coding standards, and so on). Always be sure to include and involve your developers in any automated testing evaluations and efforts.<br />
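<br />
One small way to make that concrete is to wire the checks into the build so they live and run alongside the production code. Here is a sketch of a Rakefile, assuming a conventional <i>test/</i> directory of Minitest files as in the earlier sketches:<br />
<pre>
require 'rake/testtask'

# Run the whole automated check suite as part of the build.
# The test code lives in version control right next to production code.
Rake::TestTask.new(:test) do |t|
  t.libs << 'test'
  t.pattern = 'test/**/*_test.rb'
end

task default: :test  # running `rake` alone runs the checks
</pre>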
<br />
Test automation enables development teams to go faster. Treat your test automation code on par with your production code. It may save you, your company or your customers one day. Give it some thought.<br />
<br />
<b>Where would you go?</b> (2015-04-09)<br />
<br />
An old friend reached out to me recently to ask me some questions about Ruby, a very handy programming language to know if you are in testing, development or IT/Ops. During the conversation I mentioned that I am presently working as an Agile Technical Coach, and not in a Testing role of some kind.<br />
<br />
Two things came to my mind during the conversation: (1) I am happy to have moved on from Testing (i.e. from doing it to teaching it), and (2) I went to some lengths for the job I have now.<br />
<br />
After 20 years in various Testing and QA roles, I am happy to have moved on to coaching, consulting and training. I know when it's time to leave the development game to a new crowd with fresh eyes.<br />
<br />
There are two things in my professional life that I love doing: Teaching and Testing. I did the latter for a long time, so I am pleased to focus on Teaching for a while. Unfortunately, it's still quite clear to me that many people are getting into Software Development without much knowledge (if any) of formal Testing and Quality practices. Sigh. That's too bad. It seems I will have many teaching and coaching opportunities in this field for a while yet.<br />
<br />
For those who may be wondering why I stepped into the Agile community rather than remain solely in the Testing one, the answer is simple. In agile teams, everyone tests. All the time. Test all the things! There is simply more opportunity for me to help people, teams and organisations with good testing practices in agile teams than in dedicated testing teams alone (like in a Waterfall org or Testing Centre of Excellence of some kind).<br />
<br />
Testing means more in agile teams and takes many different forms. Since I started working with agile teams, I have grown and learned a lot more about Testing than I ever knew before.<br />
<br />
My second thought (above) was about <i><b>where</b></i> I am currently working. My current employer is about 1,000 km away from home and my family. There are no direct flights and I am in a different country. On a good day, it takes me 2 flights and about 9-10 hours to get home.<br />
<br />
Thinking about my friend in Europe, I wondered how far I could get in Europe with 2 flights and 10 hours. I figure I can likely get to just about anywhere in Europe. =) Maybe I should consider working in Europe for a while?<br />
<br />
Back to North America... yes, working as a consultant away from family is hard. I find I have more free time in the evenings, and yet I still haven't caught up with the books I want to read or the books I need to write, or even updated my web site. I am still running ragged, doing what I can just to keep up with some of the conferences I wish to attend.<br />
<br />
Why don't I work closer to home? Great question. I know many Canadians working in the U.S. right now doing similar work to me. In short, if I could get good work closer to home I would certainly take it. I miss many friends and colleagues from my home town and nearby areas. I enjoy making new friends and meeting people here that I might not otherwise have the chance to meet.<br />
<br />
Most importantly, I am happy doing what I love. I love working with people eager to learn and improve how they deliver software - to do it better, faster and more reliably. I see development teams coming together and working in ways they never imagined, and I wonder where they will go in the future. I know I am making a difference with the people I work with and that makes it worthwhile. <br />
<br />
So, a question for you: geographically speaking, how far would you go to do work that you love? What would you sacrifice?<br />
<br />
<b>Agile Testing vs Exploratory Testing</b> (2014-02-19)<br />
<br />
This month I will be doing two sets of training sessions - one for Agile Testing (AT) and one for Exploratory Testing (ET). I have extensive experience with both and I enjoy teaching and coaching both activities. This is the first time that I have been asked to do both training sessions at basically the same time, so I have been giving the relationship between the two some thought.<br />
<br />
Over the past decade I have sometimes struggled with the difference between the two. I admit that I have on occasion even described them as being the same thing. Having given it some thought, I now think differently.<br />
<br />
The insight came to me when I had a conversation with someone about both topics recently. The courses I'm doing have different audiences, different outlines, and different expected behaviours/outcomes. Yes, there is some overlap, but not much.<br />
<br />
I have written previously about <a href="https://swtester.blogspot.com/2012/05/what-is-exploratory-testing.html" target="_blank">what I believe is ET</a> (it's an interesting post - I suggest you take a moment to read it if you haven't already). Near the end of that article, I mention <a href="http://agilemanifesto.org/" target="_blank">Agile Software Development</a> and the <i>Agile Testing Quadrants</i>, so there is some relationship between the two.<br />
<br />
ET is sometimes described as an <i>exemplar </i>of the <a href="http://context-driven-testing.com/" target="_blank">Context-Driven Software Testing</a> community. If you read through the seven basic principles of the Context-Driven Testing (CDT) School you may see where there are similarities in the philosophies between it and the values & principles listed in the Agile manifesto. Things like:<br />
<ul>
<li>CDT: People work together to solve the problem</li>
<ul>
<li>Individuals and interactions over processes and tools</li>
<li>...face-to-face conversation </li>
<li>Business people and developers work together daily throughout the project</li>
</ul>
<li>CDT: Projects are often unpredictable</li>
<ul>
<li>Respond to change</li>
<li>Welcome changing requirements, even late in development... </li>
<li>Deliver working software frequently...</li>
</ul>
<li> CDT: No best practices</li>
<ul>
<li>Continuous attention to technical excellence and good design ...</li>
<li>The best architectures, requirements, and designs emerge from self-organizing teams ...</li>
</ul>
</ul>
And we could probably make a few more connections between Agile and the CDT principles.<br />
<br />
So what are some of the differences?<br />
<br />
For a start, I learned and practiced ET long before the Agile Manifesto was written. That tells me that I am not necessarily doing AT any time I am doing ET. Let me say that again: if you are doing ET, that's great, but it doesn't mean you or your team are agile.<br />
<br />
ET is something you can choose to do when you want to say "yes" to thoughtful, mindful testing that takes into account not only the people and the context involved in the project, but also the desire to provide meaningful qualitative and quantitative feedback that cannot [presently] be automated by a computer.<br />
<br />
Allow me to illustrate with an example comparing an ET effort to a standard test approach on a given project.<br />
<br />
Let's say a particular "test case" has a step that asks you to answer the question: "<b>What time is it?</b>"<br />
<br />
A) A standard/traditional or automated test result might simply be: <br />
<ul>
<li>5:30 pm</li>
</ul>
... and then proceed on to the next step of the test case, the next test case, or whatever. "Moving on" is the point, and you would have done it long before the start of this sentence...<br />
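<br />
In code, such a scripted check is about as narrow as it sounds. A minimal sketch (<i>the displayed_time stub is hypothetical -- it stands in for reading the field from the real application</i>):<br />
<pre>
require 'minitest/autorun'

# A scripted check encodes exactly one expectation and nothing more --
# it will never ask whether "5:30 pm" was the right answer to begin with.
class TimeFieldTest < Minitest::Test
  def displayed_time
    # Stub: stands in for reading the value from the application under test.
    Time.new(2014, 2, 12, 17, 30).strftime('%-l:%M %P')
  end

  def test_time_is_displayed
    assert_equal '5:30 pm', displayed_time
  end
end
</pre>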
<br />
B) An exploratory tester may provide an answer along these lines:<br />
<ul>
<li>5:30 pm ... or do you want "PM"?</li>
<li>17:30</li>
<li>Do we need seconds? Do we need fractions of seconds?</li>
<li>5:30:42 PM EST on Wednesday, 12 February 2014.</li>
<li>Wait, do we care about Date/Time stamp, or just the time?</li>
<li>What if I have a sundial? Is an approximation good enough?</li>
<li>Sunset, dusk</li>
<li>"It's Tiny Talent Time" (okay, that dates me ;))</li>
<li>It's Miller time. (or whatever other beer brand you may prefer)</li>
<li>Hammer Time!! (then bust a move..)</li>
<li>Winter time (or other appropriate season)</li>
<li>Dinner time.</li>
<li>Banana time. (<i>aside: cheers to the old EA Tools team! </i>;)) </li>
<li>Overtime. Are we getting overtime pay? Is someone ordering food? ...</li>
<li>Hm, how will this input be used in the system? Are there some boundary conditions I can play with that will expose potential downstream failures?</li>
<li>What kind of input validation exists on this input field? </li>
<li>Can I enter <i>anything </i>into this field? Can I try some constraint attacks, XSS or SQLi inputs?</li>
<li>Is this a required field? What if I skip it completely?</li>
<li>Is there any user documentation, marketing material, or online help that provides guidance on how I should answer this question?</li>
<li>...</li>
</ul>
... and more responses than time and space permit me to list here and now. Now some of you may think that many of these responses are "silly" or "inappropriate", and to those of you I ask: <i>"In what context was the question asked? Are you sure? Could you be wrong? Does everyone on the project team have the same understanding of the question as what the users expect or need?"</i><br />
<br />
We have doubt and so any of the above responses may be valid in one context or another. Without further investigation we cannot be sure which subset of the above responses will help us discover interesting things about: (1) the system, (2) the people using it, or (3) the problem we are trying to solve with the given product or functionality.<br />
<br />
You can also see how many questions and how much information may be generated by a single exploratory tester. This blows up really fast and will likely <i><b>slow down your progress </b></i>through any project if you pause to do this with every single field, function or feature. (NOTE: I'm not trying to discourage you from doing ET here; I want to help you set the right expectation by understanding this reality.)<br />
<br />
The point here is that I am describing <i>a process of discovery and exploration, a process of learning</i>. This is <b>an individual's story</b>. Testers usually/often (but not always) work alone, especially in pre-Agile days and even now on large-scale Waterfall-type/outsourced projects.<br />
<br />
Doing ET well requires skill and practice. There are numerous models, heuristics, techniques and tools that you need to become comfortable with. [Product/System/User/Problem/Industry/Business] Knowledge comes from effective ET practice. Improved shared knowledge and understanding among the project team members is a marker of a good ET practitioner, but that's not always the case depending on the team and project dynamics.<br />
<br />
So what about Agile teams or Agile Testing (AT)?<br />
<br />
For a start, if you are a tester who finds yourself <i><b>alone</b></i>, sitting at a computer and <i>trying to understand</i> the context of a feature or product, just remember: YOU ARE NOT IN AN AGILE TEAM. (This situation is a clear symptom that your team needs a good agile coach.) Exploratory Testing is a crutch in this case, a kludge. It is <i><b>way more helpful</b></i> to both you and your team to use your brains (i.e. to use ET) to provide good feedback quickly; however, [structured] guesswork is a stupid and inefficient way to get by. And that goes against the intention of the Agile Manifesto.<br />
<br />
Here it is: AGILE TESTING IS A TEAM SPORT.<br />
<br />
When I teach or coach agile teams on how to deliver value more rapidly, I don't teach ET. I get the whole team together and I ask them to work together to develop shared understanding of what they want to deliver. This may include different activities, such as:<br />
<ul>
<li><a href="http://theleanstartup.com/">Lean Startup</a> (i.e. product/feature hypothesis and assumption-checking activities) </li>
<li>Story Mapping</li>
<li><a href="http://specificationbyexample.com/">Specification By Example</a> (what I consider to be the <i>exemplar</i> of Agile Testing)</li>
<li>Team pairing: </li>
<ul>
<li>Product Owner (or Business Analyst, customer...) & Developer (i.e. one or more of [designer, programmer, tester, tech writer] ) </li>
<li>Dev & Dev (i.e. select one of [designer, programmer, tester, tech writer] and add one more from that same set, even the same role)</li>
</ul>
<li>Sprint/Iteration Planning - <i><b>Definition of Done</b></i> (and "Quality") for the deliverables</li>
<li>Sprint Demonstration (i.e. take your completed, working code and give it to your customer to play with)</li>
<li>and more...</li>
</ul>
Somewhere in that above list is a Tester. A tester who understands ET will make better contributions than a more traditional-minded tester. That is a fact. I am often asked to help coach <i>QA/Test teams</i> who are excluded from "agile" development teams because they don't know how to give up their standard test documentation or processes. You don't have to be an ET practitioner though -- anyone with an open mind who is willing to adapt and learn new collaborative techniques will likely fit in.<br />
<br />
To those [traditional] testers who are worried that they don't know how to fit into an agile team, here is a quick test for you. Please look at the <a href="http://agilemanifesto.org/">Agile Manifesto</a> values and ask yourself one question: <b>Which items do you value <i>more </i>- the left or the right?</b><br />
<br />
If it's the right (i.e. processes, tools, plans, comprehensive documentation), then, yes, you should update your résumé and seek out new opportunities where you will be happier. I will coach the remaining development team members on how to <i><b>replace your </b></i>testing/checking <i><b>contributions with automated test scripts </b></i>integrated into their build process. (The truth hurts. Agile isn't for everyone. Accept it.)<br />
<br />
If it's the left (i.e. people, working software, responding to change), then you are open to learning new ways to interact with your development team members to provide value to the project. This requires courage. We are asking you to adapt to a new role, maybe more than any other development team member. A good starting point for you that may answer some questions during the transition is in Crispin & Gregory's book <a href="http://agiletester.ca/">Agile Testing</a>.<br />
<br />
There's more to AT than what you will find in that book though. Joining an agile team is making a commitment to learning. Practicing ET also requires a commitment to learning. They have this aspect in common. ET focusses on your [individual] learning efforts and what <i><b>you</b></i> can do to help provide great, timely feedback to the decision-makers. AT focusses on your ability to facilitate learning and understanding among <i><b>all the team members</b></i> to ensure that everyone is on the same page when you deliver working software to the customer.<br />
<br />
Yes, I believe that knowing ET will help make a better Agile Tester. You need to know more than just ET though. You can also learn to be a great Agile Tester without <i>formally</i> learning ET. A good agile team will provide you with the feedback you need to help you grow.<br />
<br />
What do you think? Does this help clarify the differences and similarities?<br />
<br />
<b>Testathon</b> (2014-01-16)<br />
<br />
I received an email recently about an event happening later this month in London, UK. It's the world's first official <b>Testathon</b> (<a href="http://testathon.co/" target="_blank">testathon.co</a>). The site describes the event as "like a hackathon but for testers. You’ll be testing apps in teams with some of the best testers in the world." I know some of the judges and think this will be a fantastic opportunity for those who can attend.<br />
<br />
When I received this notice my first thought was: this is a really cool thing and I should tell people about it. My second thought was: I don't normally write about conferences so do I blog about this or not? Well, yes, I decided to blog about it.<br />
<br />
In the "Context-Driven" Software Testing community, actions speak louder than words. That's one of the reasons that certifications (like those from the ISTQB and QAI) are treated with low regard and even disdain from some people in the testing community. The main issue here is that these paper transactions (certifications) emphasize memory-work over hands-on practice. <i><b>Here's a Quick Acceptance Test:</b></i> does the [particular] certification reflect (1) a level of demonstrable competence and ability in the desired field, or (2) the ability to spend money and regurgitate specific knowledge without context?<br />
<br />
<br />
So what is the response from the CD Testing community? Get together and test things! Practice makes perfect, so practice and learn from your peers and colleagues.<br />
<br />
One of my favourite social ideas from the past decade has to be "<a href="http://weekendtesting.com/" target="_blank">Weekend Testing</a>". The basic idea is to get together online and do some group testing sessions. It's a good opportunity to practice testing techniques, learn to communicate with other testers, and provide some valuable feedback to the sponsor companies providing the software to test (when applicable).<br />
<br />
At some testing conferences we are also beginning to see Testing Labs appear. These are dedicated areas with computers, laptops, and other electronic things that you can get your hands on and test in a pressure-free environment. My friend and colleague <a href="http://www.wadewachs.com/" target="_blank">Wade Wachs</a> and I have recently inherited the official "Test Lab Rat" (organiser/manager) mantle from a super tester and teacher, <a href="http://www.workroom-productions.com/" target="_blank">James Lyndsay</a>. James is continuing the Test Lab tradition in Europe and abroad and asked Wade and me to help manage the Test Labs in North America.<br />
<br />
The message here? Just test.<br />
<br />
Remember the old Chinese proverb: "Tell me and I’ll forget; show me and I may remember; involve me and I’ll understand."<br />
<br />
If you happen to be in the London, UK, area later this month, attend the Testathon and get involved! Meet some fellow testers, maybe some famous ones, practice, have fun, learn and take pride in your craft.<br />
<br />
I wish the Testathon organisers much success so that we can see more events like this happen all over the world. Let's start something new, something powerful, something cool.<br />
<br />
Learning can be fun, and we need more testing professionals who see the fun, form and function in their professions. The more passionate professionals we have in the Software Development community, the quicker we can all move forward.<br />
<br />
Do you know other opportunities or events that put practice over show and tell? Let me know.<br />
<br />
<b>Salary Thieves</b> (2013-04-24)<br />
<br />
Yesterday I attended a <a href="http://waterlooagilelean.wordpress.com/" target="_blank">Waterloo Agile Lean</a> session on Story Mapping presented by <a href="http://agilepainrelief.com/" target="_blank">Mark Levison</a>. I had heard of Story Mapping but hadn't worked through it before. I liked the hands-on exercises approach Mark took to help us understand the process and benefits of that technique. This blog post is <b><i>not</i></b> about Story Mapping.<br />
<br />
I love learning and take any opportunity I can to hear different speakers present on topics that I think may be of value. Sometimes I even attend the <i>same </i>talks and workshops when they are done by different speakers so that I can understand differences of style in presentation, stories, and tips/techniques for getting the ideas across to the audience/participants. Once I even attended the same talk by the same speaker two days in a row, and I learned something new/different the second time around! (It was a <a href="http://kaner.com/" target="_blank">Cem Kaner</a> talk when he was in our area a few years ago.)<br />
<br />
Yes, I gained an appreciation of Story Mapping yesterday. More importantly, I learned a new anecdote from the speaker - one that got me thinking. At one point, Mark answered a question about managing a large backlog on a Scrum/Kanban-style (information radiator) board. The problem with really large backlogs of stuff to do (likely anything with more than 100 items in it) is that the items near the bottom become meaningless over time. You will probably never get to them because something more important always comes up.<br />
<br />
<a name='more'></a>Mark told us a quick story about Taiichi Ohno, the father of Lean Manufacturing. I looked up the story and found a <a href="http://www.gembapantarei.com/2010/02/excerpts_from_an_interview_with_taiichi_ohno_july_1.html" target="_blank">reference to it here</a>. In 1984, Ohno was interviewed about the history of development of the Toyota Production System (which later became Lean Manufacturing). At one point in the interview that discussed work standards and the early stages of doing kaizen (continuous improvements), Ohno said:<br />
<blockquote class="tr_bq">
<i>I tasked the shop floor leaders with regular kaizen of work methods and revision of the standard work, telling them "If the kanbans do not change for one month you are salary thieves."</i></blockquote>
During his workshop yesterday, Mark mentioned the "yellowing of paper" [on the backlogs] as an indicator of "salary thieves." He was tying this point back to the question: how long has an item been sitting on your backlog? Maybe those old items are salary thieves.<br />
<br />
This is an interesting point. When I work with Testing teams, I sometimes hear the right words but see the wrong practices.<br />
<br />
For example, I often hear testers talking about Risks. That's good. They do a risk assessment and create tests for those risks but never revisit the risks. That's bad. Over time, those original risks become salary thieves. Are they still relevant? Have things changed? Are you wasting your time putting effort into managing the testing of things so out of date that not even the developers/programmers use them to create the code you are now testing?<br />
<br />
What about test cases? And Regression Testing? How many of those tests check risks and ideas that are no longer relevant? How many of them are salary thieves, taking away our precious time that we could have spent finding more relevant bugs and identifying new risks to stakeholder value?<br />
<br />
What about the tools that we use to support and manage our work? How many of them have sunset clauses tied to the creation dates of artefacts? For example, automatically identifying requirements in a backlog, test cases in a repository, or personas in a reference area that are "old" (by some definition) and therefore potential risks of losing value.<br />
<br />
Right. I don't know any software development tools that do this right now. It seems like a perfectly reasonable feature to look for though.<br />
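<br />
In the meantime, a team could approximate a sunset report with a few lines of scripting. Here is a sketch; it assumes the backlog can be exported to a CSV file with <i>title</i> and <i>created_on</i> columns (an invented format, purely for illustration):<br />
<pre>
require 'csv'
require 'date'

# Flag backlog items older than a chosen "sunset" age so the team can ask:
# is this still relevant, or is it a salary thief?
SUNSET_DAYS = 180  # assumed threshold -- pick what fits your context

CSV.foreach('backlog.csv', headers: true) do |row|
  age_in_days = (Date.today - Date.parse(row['created_on'])).to_i
  puts "#{row['title']} -- #{age_in_days} days old" if age_in_days > SUNSET_DAYS
end
</pre>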
<br />
When we think about the "dinky" Scrum/Kanban board with pieces of paper identifying bits of work, we realise that ageing has a very real effect on paper. We <b><i>see </i></b>the paper get old over time, the ink start to fade. If we have to rewrite a card or sticky note, we should be asking: what is the value of keeping it on the board at all?<br />
<br />
With electronic tools, things are a bit different. Electrons, unfortunately, don't show signs of wear over time. Unless we specifically ask a computer to keep track of information like this, we are likely to forget about it because of all the other things we have to do and keep track of.<br />
<br />
Physical board: 1, Electronic tools: 0.<br />
<br />
So. What are your salary thieves?<br />
<br />
What are some ideas you have for avoiding them?<br />
<br />
<b>Test Management is Wrong</b> (2013-02-24)<br />
<br />
Test Management is wrong. There. I said it.<br />
<br />
I can't believe it took me this long to notice the obvious. If you are doing Software Development in any fashion, and are worried about how to manage your testing to develop a "quality" product, <a href="http://www.youtube.com/watch?v=Ow0lr63y4Mw" target="_blank">stop it</a>.<br />
<br />
Let's be clear about what I mean here. If you consider <i><b>any</b></i> <a href="http://en.wikipedia.org/wiki/Software_development_process" target="_blank">software development life cycle</a> (SDLC) process, you will find activities like the following arranged in some fashion or other:<br />
<ul>
<li>Requirements gathering, specification</li>
<li>Software design</li>
<li>Implementation and Integration</li>
<li>Testing (or Validation)</li>
<li>Deployment</li>
<li>Lather, Rinse, Repeat (i.e. Maintain, Enhance, Fix, Mangle, Spin, and so on) </li>
</ul>
<br />
These activities aren't Waterfall or Agile or anything else; they are just activities. HOW you choose to do them will reflect your SDLC. I don't care about that right now. The part I'm picking on is the Testing bit near the middle, regardless of whether you do it in an agile, Waterfall, or some other way.<br />
<br />
In particular, I am picking on the fallacy or myth that a good Test Management plan/process is what you need to develop and release a high Quality product.<br />
<br />
In almost 25 years of working in the Software/IT industry, I have never worked at a company, or heard of any company, that had a Test Management process so solid that it led to high quality. I don't believe there ever will be one, either.<br />
<br />
Here's an analogy. Let's say I want to make some <a href="http://www.ricekrispies.com/en_US/recipes/the-original-treats.html" target="_blank">Rice Krispies treats</a>. The ingredients are: butter, marshmallows and rice krispies. You heat 'em up, mix 'em up, flatten 'em out, cool, slice and eat. That's it.<br />
<br />
Ya, but you know what? I have this really excellent <i>Marshmallow Plan </i>that will produce the Bestest treats anyone has ever tasted.<br />
<br />
How does that sound to you? Interested to learn more about the plan? Be truthful.<br />
<br />
If you said "Yes", I don't know if I can help you. If you said "No" there may be hope for you yet.<br />
<br />
Why is it that Software companies are attracted to the idea that Testing is somehow equated with Quality? Testing is <i><b>an</b></i> activity that you may choose to do or not do and still have a great quality product. The <a href="http://agilemanifesto.org/" target="_blank">Agile Manifesto</a> doesn't mention testing anywhere and yet the Manifesto's signatories, thought leaders, and practitioners produce great quality stuff for people. How does that work?<br />
<br />
Why is it only the <i>Testing</i> phase/activities/part that equates with Quality? Why aren't companies and managers everywhere promoting super awesome <i>Requirements Management </i>processes and plans at Requirements Conferences to help deliver high Quality? What about <i>Design Management</i>? No, no, wait, I got it. We need <i>Deployment Management</i>. That's it! I've solved it. Aha!<br />
<br />
No, these are all stupid suggestions. And you know it too.<br />
<br />
Is it because Testing/Verification activities are part of the other 'phases' or steps in software development, so surely we should be able to manage all that, right? Well, actually, no. That's the wrong approach.<br />
<br />
Test Management systems manage and measure <i>some</i> of the testing activities performed on a project. Completing your testing activities tells me nothing about the overall "Quality" of the product. By definition, it is an incomplete part of the picture.<br />
<br />
By putting your faith in Test Management plans or systems you are effectively saying "I don't know how to measure what you want (i.e. Quality), so I will measure what I can do (i.e. Test)."<br />
<br />
Does this mean it's pointless to track the testing you've done? NO! I am not saying that. If you do something that you believe contributes to the value of the project, then please track what you are doing so that others can see what you have done. Preferably, make your progress visible in some way.<br />
<br />
What we need to focus on is what the customer needs. Ask yourself: What problem are they trying to solve? What are we trying to deliver that is of <i><b>value</b></i> to them?<br />
<br />
What we need is <i><b>Value Management</b></i>. There are successful (i.e. "quality") products built and released to customers that never had any independent testers on the development team. Cool. These teams get it.<br />
<br />
I have a friend and colleague who <a href="http://leanstartupmachine.com/2012/02/1-workshop-100-students-20-mentors-a-life-changer/" target="_blank">started up a company</a> and within hours had sold a product that didn't even have a single line of code written. Way cool! He gets it. Did he worry about Test Management? Bah! He didn't even need to pay a programmer to sell something of value to a customer.<br />
<br />
I have mentioned it before: if you are doing any kind of testing on a software development project, you can likely place it somewhere within the <a href="http://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/" target="_blank">Agile Testing Quadrants</a>. Heck, you don't even have to be on an Agile team or project to see that your testing fits somewhere on the chart.<br />
<br />
So, <i>if</i> you somehow create a super Test Management Plan or system that tracks <i>all </i>of these activities for a given project, <i>then</i> would you have a Quality product? That would be really cool, by the way, but no, not necessarily. And don't fool yourself into thinking that it will.<br />
<br />
The problem is one of relationships. Human relationships.<br />
<br />
The definition of Quality I like to use comes from <a href="http://secretsofconsulting.blogspot.ca/2012/09/agile-and-definition-of-quality.html" target="_blank">Jerry Weinberg</a> and it is based on your understanding of relationships between people. The definition of "software" that I use when teaching/training/coaching is that software reflects your understanding of the relationships between people and the systems they interact with to meet particular needs. The definition of Testing that I like to work with is the intelligent use of models and heuristics to explore the relationships between the people designing a system and those who intend to use it.<br />
<br />
In short, if you want to understand what provides "quality" or value to your customers, look to how you are managing the human relationships. Forget Test Management. You might have better luck implementing a <a href="http://en.wikipedia.org/wiki/Customer_relationship_management" target="_blank">CRM</a> system in your development departments as a predictor of Quality.<br />
<br />
Testers: if you want to do a great job, focus on the testing activities that explore the systems from the perspectives of the people who matter. Never hide or bury your work (i.e. in documents, spreadsheets, test management systems, etc.). Make it visible - use dashboards, mind maps or other visual mediums because they assist with team collaboration and understanding. Keep detailed records archived somewhere if you need them, but don't worry about "managing" to those fiddly bits as an indicator of "quality" - because they're not.<br />
<br />
Forget about the Test Plans though. Disregard Test Management systems. AVOID providing any metrics on testing coverage as an indicator of Quality... especially if your development team isn't producing equally misleading metrics about code complexity, requirements reviews, and other esoteric development activities.<br />
<br />
Anyone (ALM Vendors take note) who sells a "Quality Management" system based upon managing test cases is lying to themselves and to you. They are doing it wrong. Don't be a victim.<br />
<br />
You can't kludge a development process enough to make a Quality or Test Management system produce meaningful, valid indicators of value to your customers and stakeholders. It just doesn't make sense.<br />
<br />
Approach your development teams from a human relationship perspective. Focus on how people collaborate and work together, and high quality products will emerge as a by-product. Manage the relationships with your customers, and deliver working software frequently to help them see what you can do and how you learn from past experiences. This isn't easy. It's definitely worth it though.<br />
<br />
So. You want to deliver Quality? Test Management isn't the answer. Test Management tells you about a specific task's management. It's about as useful as Marshmallow Management in making Rice Krispies treats. That is, it may have a place in the big picture, but it's certainly not the right way to look at the problem.<br />
<br />
<b>Shuhari</b> (2013-02-15)<br />
<br />
When I work with teams to help them learn something new, I try to pay attention to two things: first, how people are learning, and second, how I am teaching.<br />
<br />
When I used to teach Physics and Chemistry in high school, one validation of 'success' often came from how the students left the classroom. Teenagers generally came <i>into</i> one of those classes the same way (at least at the start of the year): I don't want to be here, this isn't important to me, I'm not going to learn anything useful.<br />
<br />
Okay. Gauntlet down. Let's begin.<br />
<br />
I knew I had a good class when students <i>left</i> the room smiling and/or talking about the ideas covered in the lesson. The real learning happens when they talk about the subjects among themselves. We need time to absorb ideas and make them a part of us. Talking about them with others is a good first step in the learning process.<br />
<br />
Putting ideas into practice is an important <i>next</i> step in the learning process. Practice makes the knowledge concrete, more permanent. Through practice we also begin to understand the limits of success under different conditions. In a classroom, practice might happen through assigned questions/exercises, experiments or projects of some sort.<br />
<br />
Once someone groks an idea through practice, we can engage in the next level of discussion -- what next? Applicability, adaptation, extending the ideas, and so on. How can we be more successful? To paraphrase Newton, how can we stand on the shoulders of giants?<br />
<br />
There are many learning models and ideas that apply for both learning and teaching material, and I don't mean to fill this space with them.<br />
<br />
In recent years, my teaching (coaching, consulting) has focussed on Agile and Testing (rather than Physics and Chemistry). When I am approached for help or advice on Agile, Scrum, Exploratory Testing, or something else, I often think about the term <i>Shuhari</i>. (In English, pronounced: <i>shoe - ha - ree</i>)<br />
<br />
Shuhari is a Japanese term from martial arts that describes the learning path to mastery. It roughly translates to "first learn, then detach, and finally transcend." From the Wikipedia page, here's the breakdown:<br />
<ul>
<li>shu -- "protect", "obey" - traditional wisdom - learning fundamentals, techniques, heuristics, proverbs</li>
<li>ha -- "detach", "digress" - breaking with tradition - detachment from the illusions of self</li>
<li>ri -- "leave", "separate" - transcendence - all moves are natural, becoming one with spirit alone without clinging to forms</li>
</ul>
<br />
Shuhari reminds me that rules and rituals are in place for beginners and that we learn to go beyond them as we mature in a particular discipline.<br />
<br />
When I teach people about Scrum or Exploratory Testing, I often see people want to start improvising or adjusting practices right from the beginning. When you do that, you jump to "ha" but without the solid foundation or appreciation of "shu". In a martial arts class, the sensei (instructor) might smack you on the head for doing something like that. (If you're lucky.)<br />
<br />
As we explore new ways of doing things, it's important to start at the beginning and <b><i>practice</i></b> the forms as described. Become comfortable with the practices. Become bored with them. Make them a part of your muscle memory so that you don't have to consciously think about them anymore. Keep practising.<br />
<br />
*Then* one day you may ask "how about if we change <i>this </i>[step] a little? What do you think?" <i>That </i>is an excellent question. A question that drives an experiment. An experiment that drives learning and helps us to enter "ha".<br />
<br />
Timing is the difference. Asking to vary a practice at the beginning doesn't help. Asking <i>after </i>you understand it makes sense.<br />
<br />
There are different concepts and models to describe the paths to mastery. What do you think of this one?<br />
<br />
I have three other models floating around in my head on Mastery and plan to cover them in future posts. I offered to talk about "Mastery" at the Test Coach Camp last year but there wasn't enough interest at the time. I wonder who the target audience is for this topic. People sometimes fall into careers. I actively sought mine, so these models meant something to me at the time. I reflect upon the value that each one offered and look to new insights still waiting for me.<br />
<br />
When I think of Shuhari, I think "Practice before Change." And that reminds me of the old joke: How do you get to Carnegie Hall?<br />
<br />
<b>The Human Side of Living</b> (2013-02-01)<br />
<br />
As I go through life I keep noticing stories, ideas and insights into humanity, and I sometimes wonder if we are meant to discover these lessons slowly or if there isn't a quicker way to learn them.<br />
<br />
For example, in high school we had a really weird Religion teacher who was very Zen or meta or something, and no one got him. I mean, he would use examples like "take an extension cord and plug it into itself and there you go." Huh? None of us got it. And then there were times when he would repeatedly say things like "attack the point, not the person", and that was a phrase I understood.<br />
<br />
From him, I learned that sometimes we can meet real jerks that we can learn interesting things from. Learn to separate your feelings about what you hear and understand from the messenger. It's hard sometimes, but you can get good at this.<br />
<br />
<br />
Later, I read a story about a samurai warrior. It's short, so here it is:<br />
<blockquote class="tr_bq">
There was once a samurai who wanted to learn the difference between heaven and hell. He sought until he found a master from whom he thought he could learn. He stood before the Master and asked him what was the difference between heaven and hell. The Master took the samurai’s sword and, turning it to the flat of the blade, struck the samurai on the head. The samurai was surprised at this but chose to ignore it. He thought that the Master had failed to understand his question. He once again asked the Master about the difference between heaven and hell. Again the Master struck the samurai on the head. The samurai staggered back and puzzled over this. He approached with his question for a third time and, before he could utter a word, the Master struck him a third time. The samurai was now so enraged at this behaviour that he grabbed his sword from the Master, raised it over his head and was prepared to bring it down on the Master’s head when the Master raised one finger and the samurai paused.<br />
"That is hell," said the Master.<br />
The samurai was instantly so overcome by the courage of this frail old man - to have risked his life for the sake of a stranger’s question - that he fell to his knees and bowed before the Master.<br />
"That is heaven," said the Master.</blockquote>
This story keeps popping into my head every now and then. What is good and evil? Is it a matter of perspective? Is it a matter of time? How is it related to compassion? Do we need to judge people/situations, or should we learn to see the good and bad in all things? We can choose what we want to make from a situation. We don't always understand the motives of others, so which stance do you initially take - heaven or hell?<br />
<br />
On the compassion thread, I learned about <a href="http://www.dalailama.com/" target="_blank">HH the (14th) Dalai Lama (of Tibet)</a> when I was in university. It wasn't part of a course; I don't remember what it was. It might have been a movie. He's a really interesting guy and has done some cool things. I pondered his thoughts on compassion and felt that he really has good insights into the human condition, so some of those ideas stuck with me. (<a href="https://twitter.com/dalailama" target="_blank">HHDL is on Twitter </a>by the way.)<br />
<br />
When I left school and started working full time, I discovered <a href="http://www.geraldmweinberg.com/" target="_blank">Jerry Weinberg</a>. Jerry published many technical books up to that point, and started the <a href="http://www.ayeconference.com/" target="_blank">Amplifying Your Effectiveness</a> (AYE) conference and Problem-Solving Leadership (PSL) workshop. The workshop and conference are based upon applying the work of Virginia Satir, a family therapist, to the workplace. I find some of the models very insightful.<br />
<br />
Skipping over many other little opportunities and lessons, I find myself thinking about a recently-published book called "<a href="http://thehumansideofagile.com/" target="_blank">The Human Side of Agile</a>" by a colleague Gil Broza. It's a good book. I like it. It sums up a lot of lessons I learned over the years, and includes new ones I didn't know about. The title really sticks with me though.<br />
<br />
When I am at work, I focus on doing things to help others. Help the customers get high quality software of value. Help the team members to learn, grow and become more confident in their abilities. I show patience and temper difficult situations with humour. That's my style. When the going gets tough, I get silly. Sometimes, though, I hear Jerry W's words ringing in my ears: "Change your organisation or change your organisation."<br />
<br />
I am an agent of change. I am here to help you establish a new norm, a new status quo, one that is better than you were before. I work with people to help them adapt into their new roles, and I often come across people who neither want my help nor anything to do with change.<br />
<br />
I can understand when people are afraid of uncertainty or the unknown and I am patient enough to work with them to try and build congruence (Satir) and focus on the point not the person (high school teacher). Then there are times when certain people can very intentionally do malicious things to undermine and attack you through a show of power or superiority. I'm too old for this crap.<br />
<br />
From Jerry, I know it is time to change my organisation when this becomes a pattern, because it is my life and I choose how I want to live and enjoy it. I don't want to be miserable at work and then bring that negativity home with me to my family that I love so much.<br />
<br />
It's my life. I want to be happy and helping others makes me happy. I'm weird that way. I understand that not everyone gets that. I'm not here to inflict compassion and other zen mumbo-jumbo on you. I really like and appreciate the Lean and <a href="http://agilemanifesto.org/" target="_blank">Agile values</a>. The focus is on *people* working together to make great things that make your customers happy.<br />
<br />
After 25 years of working in the IT sector, I can tell you that I agree with Jerry when he said that "all problems are people problems" (especially the technical ones). When I truly came to understand that, I discovered that people are at the heart of the answer to "what is Quality?" After almost 20 years in Testing, I also discovered that test techniques are really models to test the interactions between people working on the projects. This is a bit of a unique perspective - I haven't heard anyone else describe it that way - but that's how I see it and teach it.<br />
<br />
People working with people to make other people happy. Lots of other people actually. There's nothing non-human about software development. It's all about the human side of things. And yet. Schools don't teach this. Some people choose to act in inhuman ways. How do you deal with that? Heaven or Hell?<br />
<br />
It's your life, your choice. Change your organisation or change your organisation.<br />
<br />
Thank you to all my teachers, past, present and future. There is still more for me to learn.<br />
<br />Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com1tag:blogger.com,1999:blog-9421494.post-23363460248756721882013-01-21T00:19:00.002-05:002013-01-21T00:26:58.147-05:00Sharing thoughtsI've been asked a few times over the past few months why I haven't blogged in a while. Funny that. I have been writing more in this past year than I think I have in my whole life. It turns out that blog posts have fallen off my list of thought-sharing media for a short while. So here's a brief note to let you know why.<br />
<br />
About a year ago, I started keeping a daily journal as a consultant. The idea was sparked by my friend <a href="http://testertested.blogspot.ca/" target="_blank">Pradeep in India</a>, who a few years ago posted an annual report listing a brief note for each day of that year. I looked upon that report with awe and thought it might be a cool way for me to track where some of my time and days go.<br />
<br />
The daily journal is for personal reflection and I don't share it with anyone. It's pretty boring really. Disparate facts and ideas mostly. The challenge for me was in re-developing the habit of writing a little something every day.<br />
<br />
<a name='more'></a>When I was in high school I used to write every day. Again, I never shared those thoughts with anyone. But unlike my current journal, those entries were chock full of entertaining and emotional snippets of teenage life. I suppose if I hadn't lost those journals/books, I might have enough material to create a few teen book series. ;-) As an aside, several of my high school teachers thought I would go into English and continue writing in university. There was more than one surprised look when I said I wanted to go into Science.<br />
<br />
I stopped writing personal journal entries in university when I discovered the internets - Usenet in particular. I started writing socially on topics of interest in the alt.* newsgroups. This was the early 90's, so the internet was still young then. The idea of Usenet forums wasn't new to me. In fact, I was attracted to them because they reminded me of the BBS forums I used to participate in during the 80's. (Ah, stories for another day.)<br />
<br />
After graduating and joining the workforce full-time, I discovered and joined some email discussion forums in the late 90's. Around 2000, I dived into some Yahoo groups followed by some Google groups a few years later.<br />
<br />
Around that time I started up this Blogger account to keep track of some thoughts as I began to take an active interest in my career in Software Testing. My blog posts were infrequent as I had lost the habit of writing daily. So many things go on in my life, and the work focus of this particular blog has kept me from writing more. If it were a general blog of random thoughts, I would likely have written much more.<br />
<br />
Several years ago I jumped onto Twitter. Interesting medium that.<br />
<br />
I find Twitter is a good medium for me personally and professionally. While I remain on a few email discussion lists, I find I don't participate in them as much as I used to. I do however read and write/share thoughts via Twitter daily.<br />
<br />
I added a gadget/widget thingy to the Blogger layout here to show you my last few tweets. Twitter changed their API sometime last year, so my tweets stopped appearing here. It's taken me too long to return to this blog site and fix the HTML code to redisplay my tweets in the side panel.<br />
<br />
I spent about an hour tonight searching, scripting and playing with code to make those tweets appear again. I felt it was important to help the casual reader of this public blog understand that while I may share the occasional long thought on this site, I am micro-blogging on Twitter almost daily.<br />
<br />
Several months ago, I also started to capture more thoughts and fieldstones for a book I plan to write on Testing. I haven't made it public yet, and when I do, I will announce it here and on Twitter.<br />
<br />
So, I am currently writing a daily journal, capturing thoughts for a book in progress, tweeting daily, still on a few email discussion lists, participating in half a dozen conferences each year, and still trying to find time to blog every now and then. I have a huge backlog of ideas to share here. I promise to write no less than monthly here in 2013. I will share more ideas than I have in the past.<br />
<br />
I have also been asked if I have an email mailing list. The short answer is no, not at this time. If I add that to the writing list, something else will likely have to come off. I am open to the suggestion though. Maybe later.<br />
<br />Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com3tag:blogger.com,1999:blog-9421494.post-27173286659289475982012-05-24T02:01:00.000-04:002012-05-24T02:01:01.071-04:00What is Exploratory Testing?What is Exploratory Testing (ET)? I am asked this every once in a while and I hear a wide range of ideas as to what it is. This is one of those topics where <a href="http://en.wikipedia.org/wiki/Exploratory_testing" target="_blank">Wikipedia</a> doesn't really help much.<br />
<br />
For some, ET is just "good" testing and the reason we say "exploratory" is to distinguish it from bad testing practices. Unfortunately, bad, lazy, haphazard, thoughtless, incomplete, and incompetent testing is quite popular. I won't go into the reasons or supporting evidence for this disgraceful blight on the Software Development industry at this time. Suffice it to say, I don't want to be mixed in with that lot either, so I am happy to describe what I do as something different - something that is far more successful and rewarding when done well.<br />
<br />
Okay, so if ET = [good] testing, what is <i>testing </i>then? According to <a href="http://kaner.com/" target="_blank">Cem Kaner</a>, "software testing is a technical investigation conducted to provide stakeholders with information about the quality of the product or service under test." This definition took me a while to absorb, but the more I thought about it, the more I found it to be a pretty good one.<br />
<br />
If you ask <a href="http://qualitytree.com/" target="_blank">Elisabeth Hendrickson</a>, she would say that "a test is an experiment designed to reveal information or answer a specific question about the software or system." See, now I <i>really </i>like this definition! I studied Science in university and I love the way this definition reminds me of the <a href="http://en.wikipedia.org/wiki/Scientific_Method" target="_blank">Scientific Method</a>. The more I learn about testing software, the more I find similarities with doing good Science. (By the way, if you want to learn more about how to do good testing, I highly recommend you read up on the Scientific Method. <b><i>So </i></b>much goodness in there!)<br />
<br />
So, is that all there is to it? Testing = Science, blah blah blah, and we're done? Um, well, no, not really. ET has its own Wikipedia page after all!<br />
<br />
<a name='more'></a><br />
I dislike the first-line description of ET on the Wikipedia page. I dislike it because it is incomplete. It says that ET is "concisely described as simultaneous learning, test design and test execution." ... AND?!? And then what?! This definition is missing what happens <b><i>after </i></b>you do the execution part. That's really important.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSfBS7onNZBQ5F5G-104_6qkfb8V1MYw7atl36GoC25zFUOqYJZKhrJn5f3nFCE9-O-MP0aHOPtrYa6VBti9qoYEdZbQeLL_csIYdgH8GDQ-HFnAzbaE6-Hs5QwjI0yHX2NmLS/s1600/ET_cycle.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="153" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSfBS7onNZBQ5F5G-104_6qkfb8V1MYw7atl36GoC25zFUOqYJZKhrJn5f3nFCE9-O-MP0aHOPtrYa6VBti9qoYEdZbQeLL_csIYdgH8GDQ-HFnAzbaE6-Hs5QwjI0yHX2NmLS/s200/ET_cycle.png" width="200" /></a>Elisabeth Hendrickson offers a better description (IMHO): "Exploratory Testing is simultaneously learning about the system while designing and executing tests, using feedback from the last test to inform the next."<br />
<br />
I like this because it closes the loop between the <u>purpose of the test</u> and <u>what you do with the results</u>. In this case, when you learn something from the test you intentionally performed, you use that to decide what you will do next. It is kind of like playing the game of <a href="http://en.wikipedia.org/wiki/20_questions" target="_blank">20 questions</a>. If you play the game poorly, you ask specific questions about what you think the answer is - e.g. "is it a turnip? No. Is it a bicycle? No. Is it a ...?" There's a slight, random chance you <i>may</i> guess it right, but that's really unlikely. If you play the game <i>well</i>, each question you ask helps you narrow down the possibilities until you can make a good guess with a high probability of getting it right.<br />
<br />
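To put a rough number on that intuition, here is a small illustrative sketch (Python, with invented numbers - a toy, not anything from the original presentation): each well-designed yes/no question halves the space of remaining possibilities, which is why 20 good questions can pin down one answer among more than a million, while blind guessing cannot.<br />
<pre>
# A toy illustration of playing 20 questions *well*: each yes/no
# answer halves the space of remaining possibilities.

def questions_needed(possibilities):
    """How many halving questions it takes to isolate one answer."""
    count = 0
    while possibilities > 1:
        possibilities = (possibilities + 1) // 2  # a good question halves the space
        count += 1
    return count

print(questions_needed(1_000_000))  # -> 20
# Blind guessing ("is it a turnip?") would need about 500,000 guesses
# on average to get through the same million possibilities.
</pre>
<br />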
I often use this diagram to explain the relationship between <i>exploratory</i> testing and test cases:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikMlX6rWiXNI4wqUXxwVUfod2et4357dgUOLR9Ikqh_qSrHHIU9JBEIi_FqBMYoiK71YHEJdQRC3VO1cGzDWa44Jqxa1OeH_aUSsyFuYmyPtbhZk1ORsIdiBYNHp_TzqHxYVfS/s1600/ET_learning.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikMlX6rWiXNI4wqUXxwVUfod2et4357dgUOLR9Ikqh_qSrHHIU9JBEIi_FqBMYoiK71YHEJdQRC3VO1cGzDWa44Jqxa1OeH_aUSsyFuYmyPtbhZk1ORsIdiBYNHp_TzqHxYVfS/s400/ET_learning.png" width="400" /></a></div>
<br />
When we first look at a new feature or system, we don't know very much. We design experiments (or tests) to help us learn more about it. Initially, it is like the game of 20 questions, where we try many things and look at the system in different ways to try and discover what is important to someone who matters. That is, we explore the system for qualities and risks that we believe the customers, users, or other stakeholders may care about.<br />
<br />
Test cases are different. When you have learned something about the feature (ET session complete), you may choose to document or automate important or representative paths (i.e. test cases) through the software for future reference (e.g. "regression testing"). You don't learn anything new from these test cases, so we sometimes refer to them as <i>scripted</i> "<a href="http://www.developsense.com/blog/2009/08/testing-vs-checking/" target="_blank">checks</a>". We may use and reuse specific test cases for many different purposes - e.g. regression testing, performance testing user profiles, sanity checks, and so on.<br />
<br />
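For the code-minded reader, here is a minimal sketch of what such a scripted check might look like (Python/pytest; apply_discount is a made-up stand-in for the real system, not code from any real project): one representative path pinned down after exploration, which we can rerun forever without learning anything new.<br />
<pre>
# A scripted "check": one representative path, captured after an
# exploratory session taught us this path matters. Rerunning it only
# confirms nothing has regressed; it teaches us nothing new.
# Run with: pytest test_discount.py

def apply_discount(price, code):
    """Made-up stand-in for the real system under test."""
    return round(price * 0.9, 2) if code == "SAVE10" else price

def test_discount_code_representative_path():
    assert apply_discount(100.00, "SAVE10") == 90.00

def test_unknown_code_leaves_price_unchanged():
    assert apply_discount(100.00, "BOGUS") == 100.00
</pre>
<br />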
At some point in history, the (1) <i>intent </i>or <i>purpose</i> and (2) <i>test design</i> behind the testing activities were lost and some idiot propagated the idea that test cases are the important part of the testing activity. This "bad practice" has doomed most of the technological world for a few generations now.<br />
<br />
Let me be clear about this: test cases are <b><i>not</i></b> important. Anyone who <i><b>knows</b></i> how to test can create and choose new representative paths at any time, and, oftentimes, the <i>variations </i>between the chosen paths through a system help us uncover new risks and potential problems. Testing requires thinking. Checking, or blindly executing test cases, does not. If executing test cases doesn't require thinking, you may as well program a computer to run them, because humans are famously bad at precisely following instructions.<br />
<br />
An important difference between <i>exploratory</i> testing and <i>scripted</i> testing is that scripted testing <a href="http://www.theinvisiblegorilla.com/videos.html" target="_blank">blinds you to everything else going on in the system</a> while exploratory testing aims to help you see more. To use a literary example, author Paulo Coelho posted a short story on "<a href="http://paulocoelhoblog.com/2012/04/09/alchemis/" target="_blank">the secret of happiness</a>" that illustrates this point. (NOTE: please read that story before continuing here - it'll only take a few minutes. I'll wait.)<br />
<br />
I don't know if that is the secret to happiness, but I do know that in the first run through the palace, the young man was so focussed on the task that he missed everything else - this is exactly like <b>scripted testing</b>. The second time, the young man took in everything but forgot about the spoon - this is <b>random </b>or <b>haphazard testing</b>. <span style="color: red;">Many people think this is exploratory testing but it is NOT!</span> Exploratory testing would be how the wise man described the secret of happiness - complete your task <b><i>and </i></b>take in your surroundings.<br />
<br />
It sounds hard, doesn't it? You know what, it IS hard. Good testing requires thoughtful effort and practice. If good testing were as easy as we are often led to believe, then we wouldn't have all the software problems we have today, now would we?<br />
<br />
Okay, so if doing good testing, exploratory testing, is hard, who can do it? Good question.<br />
<br />
From one perspective, many people do this kind of testing naturally. BUT WAIT! They do it naturally the same way many people solve rate and calculus problems intuitively in their heads whenever they try to catch a ball. The mathematics behind the motion of a ball through the air (gravitational, kinetic and frictional forces), coupled with your movement relative to the ball in order to catch it, is really quite complex. Not many people would say they understand or can do the math, but most people can catch the ball. So, part of your brain knows how to do the math even if it doesn't tell you how it does it.<br />
<br />
It's the same with testing. There is a method to the madness. When someone goes looking for information, it is usually in response to some question in their head. Either someone asked them the question, or they thought it up based upon some related thought. That question drives them to poke, look, observe and evaluate what they learn in order to answer it. That is testing. It has important elements: the question, intentional test design, observations, and analysis of results.<br />
<br />
Some people are good at all of these elements, some are good at some of these elements, and some suck at all of them. To the latter group of individuals I say: please step away from the keyboard, and avoid management roles. Please.<br />
<br />
There is an interesting side note related to <a href="http://agilemanifesto.org/" target="_blank">Agile Software Development</a>. Practitioners and coaches of agile methods may be familiar with the <a href="http://lisacrispin.com/wordpress/2011/11/08/using-the-agile-testing-quadrants/" target="_blank">Agile Testing Quadrants</a>. You will see that "Exploratory Testing" appears in quadrant 3, so what's that all about?<br />
<br />
Funny you should ask. It is a bit misleading.<br />
<br />
You may think that ET in Q3 means that it is something that is <i>only </i>done to critique the product with some business-facing tests. Not so. Exploratory testing will be performed in any and every quadrant as long as the person doing the testing is thinking, intentionally designing their tests, and learning from the results. Last time I checked, that happens in all the quadrants.<br />
<br />
For example, when a programmer is creating unit tests to drive the development (Q1), they are thinking about the feature and design and making choices about what to automate. There is a lot of learning going on in this process, and I would very much consider this discovery process "exploratory". However, once the unit tests are coded and running automatically with every build, they are "checks" and no more learning is taking place. So, <i>executing</i> these checks that were created in an exploratory way is <i>no longer</i> an exploratory testing activity. Get it?<br />
<br />
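As a hedged sketch of that Q1 flow (Python; the slugify feature here is hypothetical, purely for illustration): the exploratory thinking happens while the programmer designs the check; once it passes and runs with every build, it is a check, not exploration.<br />
<pre>
import unittest

# The check below was *designed* first - exploratory thinking about
# the feature - then just enough code was written to make it pass.
# From then on it runs with every build, purely as a check.

def slugify(title):
    """Hypothetical feature: turn a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Agile Testing Quadrants"),
                         "agile-testing-quadrants")

if __name__ == "__main__":
    unittest.main()
</pre>
<br />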
Same thing with functional tests (Q2). You start off learning and exploring, but once you decide upon and document a specific set of test cases, executing those test cases is no longer exploratory.<br />
<br />
Quadrant 3 is an interesting place. It is the catch-all space for the million other tests that the system users and stakeholders may be interested in. The problem here is that complete testing is impossible and there is an infinite number of perspectives one may use to examine a particular system. The human brain is uniquely qualified to process a lot of different factors really quickly, integrating and adapting to new information, and eliminating and ignoring aspects that are not a priority to the stakeholders.<br />
<br />
Computers cannot do this. Not even close. That's why the bubble in the corner of the matrix says "Manual" - because our <b><i>brains are </i></b><i><b>the most efficient tools to perform this kind of testing!</b></i> Of course, we make use of tools and automation to help us gather information when appropriate; we just can't let ourselves fall into the trap of thinking that computers can do this for us.<br />
<br />
So, while exploratory testing is a <i>means to an end</i> in the other agile testing quadrants, it is the primary approach in this particular quadrant (Q3). Got it?<br />
<br />
So, if you fumble your way through the other three quadrants on your agile project and you are wondering why your quality still sucks, you may need to take a serious look at finding an awesome tester with some mad exploratory testing skills. Sorry to say that this is not widely taught in schools yet, so we are still something of a rare breed.<br />
<br />
Does this help clear a few things about Exploratory Testing? Please let me know. Cheers!<br />
<br />Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com15tag:blogger.com,1999:blog-9421494.post-33462787147140608772012-02-25T19:07:00.001-05:002012-02-25T19:14:46.475-05:00Testing is a MediumIn a few days I will be giving a presentation to the local Agile/Lean Peer 2 Peer group here in town. The group has a web site - <a href="http://waterlooagilelean.wordpress.com/" target="_blank">Waterloo Agile Lean</a>, and the announcement is also on the <a href="http://events.r20.constantcontact.com/register/event?llr=auyxwfdab&oeidk=a07e5lwd7oq825fe12d&oseq=a012sg47hyrd3" target="_blank">Communitech events</a> page.<br />
<br />
I noticed the posted talk descriptions are shorter than what I wrote. The Waterloo Agile Lean page has this description:<br />
<blockquote class="tr_bq">"This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it’s done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"</blockquote>The Communitech page has this description:<br />
<blockquote class="tr_bq">"Exploratory Testing is the explosive sound check that helps us see things from many directions all at once. It takes skill and practice to do well. The reward is a higher-quality, lower-risk solution that brings teams a richer understanding of the development project.<br />
This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"</blockquote><a name='more'></a>This is what I submitted:<br />
<blockquote class="tr_bq">"Testing is the medium in which solutions are developed. The value of our delivered solutions depend upon how well we understand and utilize that medium. We can fly straight like an arrow or explode outwards like a spherical sound wave.<br />
Traditional automated TDD "checks" help us fly straight in the direction we choose. Is it the right direction though? How do we know? Are you sure?<br />
Exploratory Testing is the explosive sound check that helps us see things from many directions all at once. It takes skill and practice to do well. The reward is a higher-quality, lower-risk solution that brings teams a richer understanding of the development project.<br />
This session will introduce the basic foundation of Exploratory Testing and run through a live, interactive demo to demonstrate some of how it's done. Bring your open minds and questions and maybe even an app to test. If ET is new to you, prepare to get blown away!"</blockquote><br />
This blog post is *<b>not</b>* about the differences in session descriptions. (In fairness, I really should learn to keep it to one paragraph. I hope to get better at writing session descriptions - it'll come with practice.) Reading the last one first, I can't help thinking that a bit of context is missing from the two posted descriptions for the final "blown away" statement. That is, that phrase comes from the sound wave analogy and not from some arrogant expectations I have for my presentation abilities. If I had known the descriptions would be shortened, I would have at least changed that sentence.<br />
<br />
This blog post is about the idea that I try to convey in the first sentence that is unfortunately missing from both posted session descriptions: "Testing is the medium in which solutions are developed."<br />
<br />
Software Development is a creative process. It is the intersection of people, skills, tools and experimentation to solve a people-problem with technology. As Jerry Weinberg once said: "all problems are people-problems." Therefore all developed products or services are "people-solutions."<br />
<br />
Perhaps the main thought with "Testing is a medium" is that you cannot (successfully) solve any problem without trying to understand what the problem is in the first place. e.g.: <i>Who is it a problem for? Where? When? How?</i> -- these are all *Testing* questions. When you deliver a solution, you check it with something like: <i>"how does this meet your needs?"</i> -- again, another question.<br />
<br />
Software Development begins and ends with questions. Somewhere in the middle of the creative development process are more testing questions. Lots of different kinds of questions, tests and checks depending on the people, skills and risks involved in developing each particular solution. One could say that Software cannot be developed without Testing.<br />
<br />
Have you tried? Have you ever been on a project where you didn't first ask the customer what the problem was? You didn't check to see if what you are building is working towards that design? Or you didn't ask the customer afterwards if what you delivered meets their needs? How did that work for you?<br />
<br />
If one cannot hope to hit the target without checking many different things, why is Testing often given so little attention or recognition on development projects? Too few individuals try to develop expertise in the Testing field to elevate their contributions to the development effort.<br />
<br />
Anyone may ask a question. That doesn't make you an "expert" in asking questions. My 10-year-old son uses scissors to cut things, but that doesn't mean I want to let him cut my hair! Everyone I meet feels they know what Testing is. Okay, then why do so many projects fail?<br />
<br />
So, what does it take to become really good at Testing? It appears to be somewhat important for the success of most software projects. Maybe it's time to take a good look at Software Development through the medium of Testing. How might you look at things differently then? What skills or knowledge would you want to learn more about? Who do you think should be involved?<br />
<br />
I won't be talking about any of this stuff on Tuesday though. After all, it's not in the session description. =)Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com0tag:blogger.com,1999:blog-9421494.post-78571335570170656942012-01-27T00:24:00.001-05:002012-01-27T00:27:21.433-05:00Quality Agile MetricsI was asked recently what metrics I would collect to assess how well an agile team is improving. I paused for a moment to scan through 12 years of research, discussion, memories and experiences with Metrics on various teams, projects and companies - mostly failed experiments. My answer to the question was to state that I presently only acknowledge one Metric as being meaningful: Customer Satisfaction.<br />
<br />
We discussed the topic further and I elaborated some more on my experiences. Regarding specific "quality" metrics, I explained that things like counting Test Cases and bug fix rates are meaningless. I also referred to the book <i>"Implementing Lean Software Development"</i> by Mary and Tom Poppendieck (which I highly recommend BTW) which warns against "local optimizations" because they will eventually sabotage optimization of the whole system. In other words, if I put a metric in place to try and optimize the Testing function, it doesn't mean the whole [agile] development team's efficiency will improve.<br />
<br />
It needs to be a whole-team approach to quality and value. Specific measurements and metrics often lead to gaming of the system and to a focus on improving the metrics rather than on delivering quality and value. If the [whole] team is measured on customer satisfaction, then that is what they will focus on. I have long since stopped measuring individual performance on a team.<br />
<br />
I haven't stopped thinking about this question though, so I put this question out on Twitter this morning:<br />
<blockquote class="tr_bq">Aside from Customer Satisfaction, are there any other Quality metrics you'd recommend in an #agile environment?</blockquote><br />
<a name='more'></a><br />
Here are the responses I received:<br />
<br />
<ol><li><a href="http://en.wikipedia.org/wiki/Churn_rate" target="_blank">Churn</a> or team turnover. (Real case: Product delivered on time, customer happy, whole team left.)</li>
<li>Escaped defects, inbound support calls/emails</li>
<li>Value created - Reference: "Lean Startup" by Eric Ries (I'm reading it right now)</li>
<li>Number of contributors to each story - not because exact count is meaningful but because it encourages collaboration & review.</li>
<li>Profitability of the project is one. Are the goals (whatever they may be) of the project met?</li>
<li>Lines of code changed during regression. That can expose some severe problems.</li>
<li>Production Defects in 15 days after 'Go Live'.</li>
<li>Cost of rework due to requirement changes.</li>
<li>Code churn as a measure of "quality" (e.g. System Defect Density) - <a href="http://research.microsoft.com/apps/pubs/default.aspx?id=69126" target="_blank">Research</a>, <a href="https://gist.github.com/829932" target="_blank">Code example</a>, <a href="http://stackoverflow.com/questions/54318/any-tools-to-get-code-churn-metrics-for-a-subversion-repository" target="_blank">Discussion</a>, and <a href="http://statcvs.sourceforge.net/statcvs/churn.html" target="_blank">Sample Stats</a>. (See the rough churn-counting sketch below.)</li>
</ol><br />
Hm, almost a Top 10 list. I am happy with the responses here and think many of them may be worth exploring further to see what insights they provide. <b><u>Cautionary Note with all Metrics:</u></b> Beware the impact they have on the team. If behaviour or performance starts to change in a negative way, STOP immediately!<br />
<br />
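As an illustration of the code churn idea in item 9, here is a rough sketch of how one might count churn per file. I am assuming a git repository purely for the example (the links in the list above cover Subversion and CVS tooling):<br />
<pre>
import subprocess
from collections import Counter

def churn_by_file(since="30 days ago"):
    """Sum lines added + deleted per file over a time window."""
    # --numstat prints tab-separated "added deleted path" lines per file;
    # --format= suppresses the commit headers.
    log = subprocess.run(
        ["git", "log", "--numstat", "--format=", "--since=" + since],
        capture_output=True, text=True, check=True).stdout
    churn = Counter()
    for line in log.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added, deleted, path = parts
            churn[path] += int(added) + int(deleted)
    return churn

# The ten most-churned files; a spike here is a question, not a verdict.
for path, lines in churn_by_file().most_common(10):
    print(lines, path)
</pre>
<br />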
One of the things I talked about in the conversation was the importance of Retrospectives to allow the team to <i><b>own</b></i> their improvement activities. If the team uses these opportunities, their improvements should be observable over several iterations. I think a Happiness Index reading during Retrospectives might be an interesting indicator of the overall effectiveness of the improvement strategies employed.<br />
<br />
What do you think? Anything else I should consider?Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com1tag:blogger.com,1999:blog-9421494.post-3389877440496270942011-12-06T01:34:00.000-05:002011-12-06T01:34:36.133-05:00Testers, Learn about Agile (and Lean)Let me tell you about something called Dramatic Irony. You see it in movies, television shows, plays and in many other places. It happens when you (as the audience or observer) see or understand something that the main characters don't. Often times this is funny, sometimes it's not. Personally, I am one of those that likes to laugh when I see this happen.<br />
<br />
On my learning/education quest over a decade ago, I took many different positions and roles within various IT organisations so that I could learn different aspects of Quality. I went through various phases, and the one I am <i>least </i>proud of was the "Quality champion." This wasn't a job title so much as a belief that (mis-)guided my actions. The role/part/perspective came mainly from believing what my employer(s) told me at the time - namely that "the QA/Test team was responsible for quality."<br />
<br />
If you have worked in Software Development for a while, and perhaps for a larger organisation, you have likely seen someone who believes they are a Quality Champion. They don't want to see <b><i>any</i></b> (known) bugs go out; they check up on everyone in the team to see that they have done their reviews or had someone else inspect their work before passing it on to the next person/team; they join committees to create, document, maintain or present processes that will increase the quality of the delivered products/solutions; and so on.<br />
<br />
Ah, the poor misguided fools. Bless their hearts.<br />
<br />
<a name='more'></a><br />
The first problem is in the company creating a scapegoat culture that puts the responsibility/blame for poor quality on the group of individuals who are least likely to help change the quality late in a development cycle - especially when the (test) team is under-informed, under-funded, under-staffed, under ridiculous time constraints, unappreciated and/or uneducated.<br />
<br />
Quality is everyone's job. And I mean <i>everyone</i>. It starts with the president, moves through every person in the organisation and even includes the customers and users of your product/solution.<br />
<br />
Returning to the naive tester who doesn't know or understand this, they do their part to motivate, inspire, nudge and, to some extent, <i>manage</i> the individuals affecting the quality of the released products. The effect of this is easy to predict. The nail that sticks out gets hammered down.<br />
<br />
As it happened to me, I have seen it happen to other testers - they become disheartened, give up and withdraw back into their own work routine, ignoring everyone else and just focussing on their part.<br />
<br />
I didn't give up entirely though - I can be persistent. I kept my eyes and ears open and looked in different places for ideas to help me understand how the organisation/system and "quality" fit together.<br />
<br />
At the start of the 21st century, I stumbled upon something called the <a href="http://agilemanifesto.org/">Agile Manifesto</a>. I can't say that I completely understood it at the time but I was certainly excited about it. I mentioned it to my manager and he said it was a passing fad and that in 5 years no one would ever remember it. I <i>felt</i> that he was wrong and trusted my instincts on this one.<br />
<br />
Over the next several years I learned about Agile and the different implementations. It all seemed very programmer/developer-centric to me as none of the models, articles, books or people ever seemed to talk about the testers. There was certainly a lot about devs taking responsibility for the quality of their work and incorporating testing practices into their regular routines. Hey, I'm all over that! Rah, rah, rah, sis-boom-bah, yaaaay Agile! :-D<br />
<br />
Now, a decade later, I understand the Agile movement in a deeper, richer way. It is part of how I think and solve problems. It is part of how I encourage people to work together and to focus on the things that matter when delivering value to the customers. Just as I once described myself as someone who eats, sleeps and breathes 'Testing', I believe I would say the same thing about 'Agile'.<br />
<br />
Here's the kick: the Agile movement is an ENTIRE COMMUNITY OF QUALITY CHAMPIONS! (oops, the caps lock got stuck for a moment there.)<br />
<br />
That's right, listen up all of you testers out there who think you are alone and that no one is listening to your cries of "there must be a better way." There are people out there - Agile Coaches and Consultants - who are working to do just that. They find, create and teach better ways for development teams to work together to raise the quality/value of the delivered solutions.<br />
<br />
If you don't know anything about Agile, start now. Read up on it, talk to others, attend a course, find online webinars, go to a conference - anything! Just get out there and start learning about Agile now! The same applies to Lean Software Development - learn about it.<br />
<br />
Please keep in mind that there is a big difference between <i>going through the motions</i> of agile practices and actually <i>being/thinking</i> "agile". The agile <i>mindset </i>is more important to me than any particular set of practices.<br />
<br />
This is especially important to keep in mind if someone tells you that testers have no place in Agile teams. Those people are what I like to call "wrong." (Get off my lawn.)<br />
<br />
Testers can bring valuable insights to the agile software development process if the team works together and embraces the strengths of each team member. You, dear testers, must be open to change and adapt your role to work in new ways.<br />
<br />
<b>Warning, Warning, Danger, Danger:</b> if someone tells you that you are "doing agile" and you (as a tester) should keep updating your manual regression test cases and test plans, please tell them to kindly "get off my lawn" for me. Thanks.<br />
<br />
I still see this happen. I go into a client's office and look at how the software development team members are working. I see the testers off on their own, testing things in isolation and complaining about how no one seems to care about Quality because they only see and test the product at the end, just before it goes out the door.<br />
<br />
Ooo, irony. I see something happening around you (within the community, industry, and sometimes within your own company) that you don't see.<br />
<br />
Testers: if you feel like you are in this "Quality champion" role, be the hero and talk to an Agile Coach. Get one to come in and do an assessment. You aren't alone. You can help make a difference. Ask for the right help.<br />
<br />
Be prepared to change yourself - to learn, to adapt to new ways of working with others, and to deliver a whole new level of quality and value that you didn't think was possible. Be a model team member and let the Agile coach help guide the rest of the team. Quality isn't your job - it's everyone's.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com0tag:blogger.com,1999:blog-9421494.post-46380167713624827422011-10-18T01:52:00.000-04:002011-10-18T01:52:31.553-04:00The Future(s) of Software TestingThis topic keeps coming up in various discussions so here are some thoughts on what I think a future may hold for us one day. What does it mean to predict the future? Does it mean it is inevitable? If it is something appealing, maybe it is something we can work towards.<br />
<br />
What does it mean to talk about the Future of Software Testing without talking about "Quality" (whatever that is)? I believe that Testing is a means to helping you attain the desired quality, but that on its own it is not an indicator of what the delivered quality will be. I think it is fair to speculate on the practice of Testing in isolation from the idea of Quality, just like it is fair to speculate on the kind of vehicle you use for travel without saying it is a clear indicator of the quality of the journey.<br />
<br />
When it comes to predicting the future, how far ahead do we look? It used to bother me that H.G. Wells chose to go 800,000 years into the future in his famous work, <a href="http://en.wikipedia.org/wiki/The_Time_Machine">The Time Machine</a>. Some authors go 100 years or even just 5 or 10 to tell their tale. I will <i>not</i> be writing about what will happen in 5 years' time, and I really hope it doesn't take 100 years. I don't know how long some of these events will take to manifest. I do have some idea of 'markers' that we may see along the way, though.<br />
<br />
When it comes to Software Testing, two thoughts jump immediately to mind: 1) Software Testing is an inseparable part of the creative Software/Solution Development process; and 2) <u>there will be many different possible futures depending on how you do testing today</u>. Put another way, there is no one <i>right way</i> to solve a problem, and creating software is a complex problem-solving activity performed in shifting organisations of human interaction so there are many ways to do it. In my opinion, the technical 'fiddly bits' like code, tests and user documentation <i><span class="Apple-style-span" style="color: red;">pale in comparison to the difficulty of those two primary challenges: (1) solving the real problem, and (2) working with others to get the job done.</span></i><br />
<br />
When we ask what is the future of software testing, we are really asking what is the future of software development. So what will Testing look like in the future? Well, what does Testing look like today? What does Development look like today?<br />
<br />
<a name='more'></a><br />
<b><u>Scenario 1: No (internal formal) Testing</u></b><br />
For some, testing is only done by the end users. This may be a single individual, a small group of people or a larger population depending on the application. For example, if you have a highly-specialised application, the best testers may be the domain experts themselves (e.g. researchers, scientists, and so on). Aside from some quick checks made during development, no formal testing is performed prior to release of the software to the end user(s) for evaluation.<br />
<br />
Non-critical software applications that people can live without if they fail are sometimes also released without any special internal testing phase. I have found that this often happens when a company is in a new niche market and has no competitors yet. These applications may be prototypes or ways to try to figure out user/market value. If the apps stop working, people can continue on without them.<br />
<br />
<i><span class="Apple-style-span" style="color: blue;">I don't see the future of this kind of software development changing much.</span> </i>No (internal) Testers are affected in the present or future of this kind of development shop. I believe with the advances in development technologies, we will see the quality of deliverables improve over time although the activity of determining suitability or fitness for purpose will always remain. That is, specialised software will always require the approval of the expert customers. They are the testers here.<br />
<br />
<b><u>Scenario 2: Programmer-driven-testing</u></b><br />
I believe that "testers" are "developers" by definition, so rather than saying "developer-testing" (which would be confusing) I will say the code jockeys are the ones who own and perform the testing here. This is different from the scenario above in that formal test frameworks are in place and testing happens prior to release to the customer or users. There are no separate testers or test teams within the company to take a second look at the system to make sure it "looks right" (big air quotes here). Many Agile Development shops operate this way, especially smaller ones.<br />
<br />
<span class="Apple-style-span" style="color: blue;"><i>Again, as there are no separate testers to speak of here, the future here won't seem very surprising. I expect things will pretty much look the same as they do today - only the tools will become more sophisticated.</i></span> (more about that below, in the next scenario)<br />
<br />
<b><u>Scenario 3: Functional System Testing</u></b><br />
Unfortunately, this makes up a large part of the software development companies out there with QA/Test teams - the "traditional" software testing. It depends on the context, of course; however, my experience has been that this activity is largely a waste of time and money, performed more for show and to satisfy lawyers (i.e. copious test paperwork as "evidence" of due diligence) than to actually raise software quality.<br />
<br />
Anyone today whose job is to create mountainous test documentation that slows the creative development process, creates division and mistrust between project team members, and only serves to check that what has already been built has been "built as specified" is completely wasting everyone's time and money. I believe the term here is WOMBAT - waste of money, brains and time.<br />
<br />
<span class="Apple-style-span" style="color: blue;"><i>The future here is easy to predict. This tester role will completely disappear in the future as it is "wasteful" (in "Lean" development terms) and provides no additional value to the development process or solution quality.</i></span> Sorry kiddos - adapt or die.<br />
<br />
<span class="Apple-style-span" style="color: blue;"><i>What will trigger this future? Simple. Responsible development practices and smarter development tools. </i></span>When you take a look at the value that these <i>test cases</i> (i.e. usually very narrow-focussed functional "checks") provide, they are generally standard, straightforward things like:<br />
<br />
1. Is the feature element (button, field, widget, text, whatever) that you said is supposed to be there actually there?<br />
<ul><li>Test-Driven Development (TDD) is a development practice (available today!) that completely eliminates the need to have separate testers do this kind of thing manually. There is simply no good reason for this testing activity to continue in a <i>manual </i>fashion into the future. It shouldn't even be done manually <b><i>today</i></b>!</li>
<li>(TDD-style) Automated functional "checks" directly linked to the code are way more efficient to maintain. They also facilitate good quality deliverables that don't degrade unexpectedly over time.</li>
</ul>2. Do input fields allow unexpected inputs?<br />
<ul><li><span class="Apple-style-span" style="color: blue;"><i>I believe that (future) advanced development tools (programming languages, compilers, etc.) will include model-based testing (MBT) subroutines that automatically scan for such trivial aspects of developed code.</i></span></li>
<li>There is really nothing magical in this kind of testing activity. Yes, it finds a lot of really good bugs. The only perceived "magic" here comes from inexperienced programmers who don't know how to develop better solutions. (A present-day cousin of this idea is sketched just after this list.)</li>
</ul><br />
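Those MBT subroutines are still speculation, but a present-day cousin exists in property-based testing. Here is a hedged sketch (Python with the Hypothesis library; parse_quantity is a made-up stand-in for an input field's validation, not code from any real product):<br />
<pre>
from hypothesis import given, strategies as st

def parse_quantity(text):
    """Made-up input-field validator for this sketch."""
    value = int(text)  # raises ValueError on junk input
    if value not in range(1, 1000):
        raise ValueError("quantity out of range")
    return value

@given(st.text())
def test_unexpected_input_is_handled(text):
    # The property: any generated input is either accepted as a valid
    # quantity or rejected with ValueError - never a crash, never a
    # silently wrong value.
    try:
        assert parse_quantity(text) in range(1, 1000)
    except ValueError:
        pass
</pre>
<br />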
Generally speaking, I don't know when programmers stopped taking responsibility for such basic testing and checking activities as part of their coding tasks. If you go back to the 1960's and 70's, there were no separate testers - programmers did it all. This is easy stuff. I really believe that if programmers had continued to "own" this kind of testing, it would have been part of the development tools by now.<br />
<br />
We took one step forward in the 60's and 70's and then two steps backwards in the 90's and 2000's. It's no wonder that the "agile movement" is trying to move programmers back in the right direction. When this quality/value ownership in development becomes more widespread, I will be happy to report that we will have achieved a 1960's level of development craftsmanship. Again. Sigh.<br />
<br />
<b><u>Scenario 4: Business Analysts, Personas and Suitability Testing</u></b><br />
Testing in some companies happens with BA's or other specialists who act on the customer's behalf to check that the application or system developed (SUT) meets the expected <i>fitness for purpose</i>. That is, does the SUT meet the business needs (rules, SOP's, statutes, industry standards, and so on) of the customers or users? These types of internal testers generally have some kind of industry or domain experience or knowledge.<br />
<br />
In testing jargon, this is more "validation" kind of testing or "did we build the right thing?" Sometimes this is done with the help of "typical user profiles" called "<a href="http://www.software-testing.com.au/blog/2006/07/30/personas-substruction-and-other-trades-tricks/">personas</a>." In the absence of formal requirements or tests, one can ask the question "what would user X do in this situation?" This kind of testing has more of a basis in business and psychology and doesn't <i>presently </i>lend itself to automation very well.<br />
<br />
<i><span class="Apple-style-span" style="color: blue;">I believe that in the future, these kinds of tests will be automated as well, using intelligent systems and algorithms that can calculate the percentage probability of a developed solution falling within the desired user parameters.</span></i><br />
<br />
I believe that the <a href="http://creativemachines.cornell.edu/eureqa">Eureqa</a> system provides us with a glimpse of what intelligent computer systems can do today. With advancements in hardware technology, I believe it will one day be practical for humans to interact with computers via natural-language voice control and have the computers do these kinds of checks for us. The "comparing system" or "oracle" will group business rules together into a model, run through the SUT and heuristically compare the data output with the desired business model. At that point it is simple mathematics to report that the SUT meets approximately 72% of the desired model, to include confidence limits, and to tell you which rules the SUT fails to meet.<br />
<br />
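To make that arithmetic concrete, here is a toy sketch (Python; all numbers invented, and a real oracle would be vastly more sophisticated): given pass/fail results for a set of business rules, report the percentage met along with a rough 95% confidence interval using the normal approximation.<br />
<pre>
import math

def rule_coverage(results):
    """Fraction of business rules met, with a rough 95% interval."""
    n = len(results)
    p = sum(results) / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approximation
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

results = [True] * 72 + [False] * 28  # e.g. 72 of 100 rules met
p, low, high = rule_coverage(results)
print("SUT meets {:.0%} of the model (95% CI {:.0%}-{:.0%})".format(p, low, high))
</pre>
<br />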
We're not there yet. If this describes your job, I'm pretty sure your job is safe for the next 5-10 years (if you are good at what you do). Development of this kind of "oracle" really depends on a lot of things, including how quickly we get through scenario 3 above.<br />
<br />
<b><u>Scenario 5: Life-Critical Systems</u></b><br />
This is more than simply a variation of scenario 4 above. These kinds of systems are things like medical devices, nuclear/energy management systems, aerospace and deep-sea technologies, and so on. Basically, any system whose failure would very likely cost someone their life.<br />
<br />
<span class="Apple-style-span" style="color: blue;"><i>When people's lives are on the line, I believe the responsible action will be to always have people involved in the evaluation and assessment of the SUT.</i></span><br />
<br />
Yes, I believe that advanced development tools and technologies such as what I mentioned in scenarios 3 & 4 will greatly improve the foundational quality of all systems developed - including life-critical systems. However, the role of the <i>responsible development organisation</i> here will be to have good, smart, skilled people own the field trials in a way that determines "fitness for use" at a level well beyond what I have already described above.<br />
<br />
If my word processor is unavailable, I may get annoyed but I will find another way to write a message. If a pacemaker stops running after two weeks of use, you can be sure that the patient involved cares about this problem in a really big way. As will his or her family.<br />
<br />
<b><u>Scenario 6: Black-Box System Testing, Para-Functional Testing, Exploratory Testing</u></b><br />
This is a superset of scenarios 3 & 4 and complementary to scenario 5. In this role, the tester looks at more than just the functionality of the system and is trained to ask questions about the SUT and user expectations that sometimes challenge the current design of the developed solution.<br />
<br />
A good Exploratory Tester identifies assumptions and asks open questions about them. For example, questions about the user experience, flow of data, security and privacy, internationalisation, reliability and many other facets that are often skipped or ignored in "traditional" software test teams.<br />
<br />
I do this style of testing, and I know many good people who also do it around the world. Unfortunately, I also know that we represent a small percentage of all the test teams out there.<br />
<br />
I believe that this testing role fills an important niche in the software development "creative problem-solving" activity that is currently lacking in many companies. That is, we go back to the important question: <i style="color: red;">are we solving the right problem for everyone who matters?</i><br />
<br />
<i><span class="Apple-style-span" style="color: blue;">I see two possible futures here. First, if people are still involved in the development process, I believe that this testing role will be split in two parts. The hands-on testing part will be automated using the advanced development tools I already described above. We can simply add new MBT subroutines to the tools to account for new personas, perspectives and potential problem types. The creative, investigative people-interaction part will still need a skilled person to go around, talk to people and ask the right questions. This will lead to creating the right tests for the tools to help us answer.</span></i><br />
<br />
<i><span class="Apple-style-span" style="color: blue;">In the second possible future, if people are <b>not</b> involved in the software development process, this whole testing activity will be automated in a fashion similar to what I described in scenario 4 above.</span></i><br />
<br />
<b><u>Scenario 7: Specialised tests - e.g. Performance, Usability</u></b><br />
This is a variation of scenario 6 above. A specialised test is one that answers a specific class of questions. My experience in Performance Testing tells me that how I <i>do</i> this kind of testing is different from other kinds of testing. At the end of the day, there is a set of rules and models that apply to doing this kind of testing properly. We will be able to program or teach these rules to an "oracle" system at some point in the future.<br />
<br />
<span class="Apple-style-span" style="color: blue;"><i>Human research and expertise will go into the problem-solving models that we will program into these computer systems. Depending on the complexity of the development project, the oracle system may be able to ask the appropriate questions and execute the tests without further prompting. In some cases, I expect the initial questions will come from a person and the computer system will be able to perform the test and report the results as required.</i></span><br />
<br />
<br />
To sum up, I see the future of software development as being <b>much</b> more plug-and-play than anything we have today. Software testing activities will be largely automated with the possibility that humans may still be involved in asking important questions that lead to suitability or fitness for purpose. In time, I think that learning computer systems will be able to anticipate those kinds of questions and they will free us up to do different, more interesting creative work.<br />
<br />
Don't worry, testers. Your jobs are safe as long as the programmers' jobs are safe - provided you are contributing value to the development activities. I believe our "developer" roles will disappear in close tandem. Software development and engineering is still a relatively young field/industry. We still have a lot of growing up to do.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com8tag:blogger.com,1999:blog-9421494.post-55469301813775534552011-08-23T01:29:00.000-04:002011-08-23T01:29:42.147-04:00Hobbies and InterestsSeveral years ago I wrote an article summarising some of the key points I keep in mind while interviewing candidates for a test team. The article is called "<a href="http://staqs.com/pubs/Hiring_Testers_PC2007.pdf">Hiring Software Testers in an Information Age</a>" and is available as a PDF on my main site. The article was originally targeted at recruiters who kept asking me for advice on hiring software testers, and they would always be surprised at the level of detail I went through in describing what it takes to hire a good person for a testing position.<br />
<br />
Conversations with recruiters over coffee would always start the same. I would say something like: if you are just trying to find a warm body to fill a position, then you don't need to hear what I have to say. If you want to hire someone who thinks and has a good chance of fitting in with the culture of the team and organisation to provide value, then it is a complex problem that requires insights into what the position actually involves.<br />
<br />
There are about a dozen different checkpoints that I go through when considering and interviewing candidates, and the paper I wrote touched on some of the major points but not all of them. Actually, I even removed some of them from the article as early drafts had too much information. My intention was to get some of the important points across without writing a book.<br />
<br />
Recently, a colleague and friend, Michael Mahlberg tweeted the following:<br />
<blockquote>RT <a class=" twitter-atreply" data-screen-name="NolanBushnell" href="http://twitter.com/#!/NolanBushnell/status/98789741268447232" rel="nofollow"><span class="at">@</span><span class="at-text">NolanBushnell</span></a>: At Atari we hired based on hobbies and not grades in school. We ended up with the best engineering group in the world.</blockquote>I liked that comment and followed up with a supporting tweet:<br />
<blockquote>On Hiring: if a résumé or cover letter doesn't describe Hobbies or other Interests, I usually skip it.</blockquote>This sparked some conversation on twitter and I want to elaborate on my comment here.<br />
<br />
<a name='more'></a><br />
First off, it is important to know that I place myself in the <a href="http://www.context-driven-testing.com/">Context-Driven School of Testing</a>. The principles of this 'school' put the focus on the <i>people </i>working together to help deliver the right solution. One of the things I have noticed over the last 22 years in the software industry is that the best testers are the ones who care about testing. They have fun with it. They believe they are working to improve things for the customer. They have a drive and motivation that lets them ignore or put up with a lot of crap that is dumped on them by unhealthy organisations and ignorant individuals.<br />
<br />
Unfortunately, that kind of motivation, passion or drive doesn't come across on a standard résumé. I can spot it straight away when I am talking with someone, but how do you communicate it in a standard functional résumé? In general, I don't see it in the "technical skills" or job description sections that focus on accomplishments and other task-oriented details. The majority of the time I notice evidence or hints of passion and motivation in the cover letter, if anywhere at all.<br />
<br />
So, when I am hiring for a test team - a team that I want to integrate well with the rest of development and the organisation, a team that I want focused on building human relationships as well as exercising systems and scientific thinking in their quality investigations, a team in which I want to encourage fun and respect for the hard work of providing valuable information to help make timely decisions - where do I start when I am looking at a stack of résumés and job applications?<br />
<br />
People who submit cover letters go to the top of the pile. People who include "Hobbies and Interests" in their résumés are next. People who don't submit a cover letter and don't tell me anything other than a bunch of dry technical information and job details are put at the bottom of the list and often fall right off the pile.<br />
<br />
Having a cover letter or a "Hobbies and Interests" section doesn't guarantee an interview but your chances of having me give you a quick call are higher. So, what's the deal with this "Hobbies and Interests" section anyway?<br />
<br />
<a href="http://www.jrothman.com/">Johanna Rothman</a> wrote an awesome book called "Hiring the Best Knowledge Workers, Techies & Nerds." If you don't have a copy yet, and you are in the business of hiring technical people, I highly recommend you get a copy of this book. Johanna is an awesome person and a terrific writer. She writes a few blogs and I learn a lot from them.<br />
<br />
Johanna wrote a good blog post back in 2004 titled "<a href="http://www.jrothman.com/blog/htp/2004/01/tips-for-reviewing-resumes.html">Tips for Reviewing Resumes</a>". One of the points she wrote in that post says: "<i>Hobbies or other personal information. This stuff isn’t relevant to the job and should not be part of how you select candidates.</i>"<br />
<br />
I partly agree with this sentence. I really don't care about personal information in an application. By "personal information" I mean things like marital status, age, sex, or anything else that the government might use to classify you in their census demographics charts.<br />
<br />
What about hobbies and other interests? Things like: playing music (e.g. piano or guitar), reading, playing video games, doing magic, cycling, martial arts, improv, and so on.<br />
<br />
This stuff I find both interesting <b><i>and</i></b> relevant! Why? Creativity, passion and (skill) transference.<br />
<br />
I am looking to hire <i>people</i> -- intelligent, thinking, caring, fun people and not drones. I honestly don't know whether the stuff I'm reading on your application is true, embellished or fabricated. I want to get an idea of the <i>whole </i>picture of who you are. I <i>will</i> test you for your technical ability during the interview, so I'm not worried about that part. Finding someone who has the right mindset and will fit in with the rest of the team is harder to grasp from technical details and job accomplishments alone.<br />
<br />
Based on how I currently understand, apply and teach Software Testing, I like to find candidates who exercise both the creative and analytical parts of their minds -- i.e. people who are both right and left-brained (to use a dated, flawed model of the human mind). When I read through a candidate's hobbies and interests, I build a list of <i>assumptions</i> that I can check in a phone call or in-person interview.<br />
<br />
<b><i>Assumptions</i></b> might be things like:<br />
<br />
<ul><li>if I see someone who likes to do creative things like play music, sing, knit, et cetera, these are right-brained/creative activities. This is a good sign that the candidate might be good at Exploratory Testing as I do it.</li>
<li>if I see someone who participates in theatre, improv or role-playing games, then they may be good at user profiling and test design.</li>
<li>if someone likes to read, this may tell me that they are learners and continue to feed their imaginations. This is also a good sign for Exploratory Testing, and I am happy to ask them about the kinds of books they read.</li>
<li>sometimes their interests may align very well with the interests of the organisation and the product or solution in development. Things like playing video games or participating in sports are <i>really</i> good hobbies to have if the hiring company makes games or develops solutions for the Sports industry. </li>
</ul><br />
There are too many examples for me to list here. The point is that sometimes the Hobbies and Interests section can provide me with insights that I won't get from the Work Experience, Technical Skills or Education sections of a résumé alone. I look for evidence of creativity, motivation, passion, balance, feeding imagination, professional activities, community involvement, and so on. Sometimes, the skills they develop in their hobbies are directly transferable to the testing tasks required -- e.g. learning, observation, relationship-building, organisation, analysis, design, problem-solving, and so on. Sometimes they aren't - it depends on the individual.<br />
<br />
These are all <i>assumptions</i>, I know, so I treat them like assumptions. I make note of them and ask about them in phone calls and interviews.<br />
<br />
If I like the candidate and hire them, I use their hobbies/interests as examples when coaching them on testing theory, models and practices. Since the hobby is familiar to them, I find this is a really powerful teaching technique. I also encourage them to come up with their own testing analogies using knowledge and experiences they are familiar with. It raises their self-confidence and helps them to remember abstract ideas in their own terms.<br />
<br />
The likelihood of me finding a Testing Superstar by skimming through résumés is pretty slim. The likelihood of me finding a candidate who I can help <i>become </i>a Testing Superstar is much higher if I can learn more about <i>them</i> in their job application; learn more about what makes them a unique, creative, passionate individual who cares about others, doing excellent work, providing value and enjoying life.<br />
<br />
But that's just me. All interviewers are different. Your mileage may vary.<br />
<br />
When in doubt, I suggest candidates be themselves. If you are penalised in a hiring process for it, you don't want to work there anyway. You are allowed to have a life. Work is just a part of it.<br />
<br />
Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com3tag:blogger.com,1999:blog-9421494.post-17715313424855490912011-07-28T02:56:00.001-04:002011-07-28T03:01:45.365-04:00Quality Center Must DieIt is not a matter of "if" -- it is a matter of "when" HP's Quality Center software will die. And you, my dear readers, will help make that happen.<br />
<br />
"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.<br />
<br />
But I am getting ahead of myself. Let's rewind the story a bit...<br />
<br />
Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.<br />
<br />
After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more like the kind of consultant who empowers clients with whatever they need to help themselves learn and grow. It's a bit of a slower pace, but the results are more positive and long-lasting.<br />
<br />
I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One of the surprises that cropped up this time was that this development team is <i><b>not</b></i> an end-to-end delivery team, so when they are "done" their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be worked out here and I am happy to see some really bright, caring people trying to solve these problems. So that's okay.<br />
<br />
<a name='more'></a><br />
Patient Zero and I are part of a larger test team, and the rest of the test team all work on Waterfall-style projects, use Waterfall-compatible tools, and they generally don't get how we work. :) Unfortunately, one of the tools mandated for our team's use is HP's Quality Center (HPQC). I hadn't seen that tool in about a decade and it looked very similar to how I last remembered it.<br />
<br />
To my agile coach/practitioner friends I should clarify that at <i><b>no </b></i>time <i>during</i> our sprint development work does anyone ever touch HPQC! However, once the code is deployed/falls into the Waterfall Release process, regression test cases are created in HPQC and it is used for defect tracking. It is mandated, and so shall it be done. I can live with that. It's just a tool at this point and the impact to our ability to deliver a good solution is eliminated by the fact that we don't touch it until after we are "done". (Communication and collaboration FTW!)<br />
<br />
Two days ago.<br />
<br />
Our whole test team took part in a 2-day HPQC training workshop on something HP calls "Business Process Testing" or BPT. Being naturally curious to learn something new, I wanted to know what BPT was and how it fits into the bigger testing picture. Here we go.<br />
<br />
<div>We were given a handout with some "test scenarios" to be used for training. The test scenarios fell into this pattern:</div><div><ul><li>Scenario name/title</li>
<li>Requirement description</li>
<li>Test Situation (I am staring at this right now and I still don't know what this means)</li>
<li>Role (kind of system user this requirement/situation applies to)</li>
<li>Steps</li>
</ul></div><div><br />
</div><div>That's okay information. The "Steps" are what you might typically expect to see if you have been testing for a while. Here is an example for working with a sample web app:</div><div><ol><li>Login to the system with (a certain type of user)</li>
<li>Navigate to some module in the app</li>
<li>Click "Create" button from the tool-bar</li>
<li>Enter mandatory field values and save</li>
<li>Search for the information you created in the previous step</li>
<li>Logout of the system</li>
</ol><div><br />
</div><div>And there were 6 of these scenarios.</div></div><div><br />
</div>I read through these tests and then I tried to follow them using the system. I quickly encountered a half-dozen bugs - some with the system, some with the test scenarios/cases, and some were open questions that I would follow-up with the Product Owner for requirement clarification.<br />
<br />
But, whoa-whoa-whoa-hey... wait a minute. We only need to worry about *these* documented test scenarios! I struggled hard to keep my mouth shut about the value of time and the many different kinds of tests I would happily engage in at this point if I could leave the tool alone. But, I left that to my "inner voice" and we were now one hour into the first day's training.<br />
<br />
At this point, we were given an overview of the HPQC modules and told to (1) enter the Requirements, (2) create BPT test scenarios, and (3) "componentize" the test scenarios into groupings of related steps. This last part required some explanation since it was new to me, and once we got it, we set to work on the task. Since we could see (1) and (2) on the handout sheets in front of us, we went straight to work on part (3). It was kind of fun working in a small group of 4, looking at these scenarios and trying to come up with solutions.<br />
<br />
And then someone went and spoiled the fun. We were "told" that we weren't supposed to do part (3) until <b><i>after</i></b> we had entered the information for (1) and (2) into the tool. I was all like "really?" and then a team lead came by and repeated the exact same thing. This may be summed up as: Enter data into the tool first, and think later.<br />
<br />
I was kind of shocked by this comment and attitude, and in retrospect it foreshadowed the rest of the training experience -- i.e. tool and process first; think later; maybe. Okay. I'll play along and see where this goes.<br />
<br />
During this exercise, I began to grasp this idea of "components" as HP uses them. I think they are like Page Objects - chunks of code (or in this case, test steps) that perform a certain function, promote reuse, and reduce duplication. Although it was never described in this way, I believe the HP "BPT" module is a proprietary Do-It-Yourself <a href="http://en.wikipedia.org/wiki/Domain-specific_language">DSL</a>. Aha! I have experience with those. I get that stuff.<br />
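<br />
To make this "component" idea concrete, here is a minimal sketch in Ruby (my scripting language of choice). To be clear: this is <i>not</i> HPQC or BPT code - the class, component names and steps below are all invented - but it captures the reuse idea I had in my head:<br />
<br />
<pre>
# A "component" as I understand it: a named, reusable chunk of test
# steps, much like a Page Object. Everything here is hypothetical.
class Component
  def initialize(name, &steps)
    @name, @steps = name, steps
  end

  def run(params = {})
    puts "-- #{@name} #{params.inspect}"
    @steps.call(params)
  end
end

login = Component.new('Login') do |p|
  puts "   enter user '#{p[:user]}', enter password, click Sign In"
end

create_record = Component.new('Create Record') do |p|
  puts "   open the #{p[:module]} module, click Create, fill mandatory fields, save"
end

logout = Component.new('Logout') { |_| puts '   click Sign Out' }

# A "business process test" then becomes a sequence of shared components:
scenario = [login, create_record, logout]
scenario.each { |c| c.run(user: 'admin', module: 'Defects') }
</pre>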
<br />
So, once I got it, I started to explain the concept of <a href="http://en.wikipedia.org/wiki/YAGNI">YAGNI</a> to the other members of my group. That is, let's not overthink or over-engineer these components. Let's build/write them based on the needs of the requirements in front of us. We will modify the components in the future as new requirements appear. This idea was well received and we quickly came to an efficient solution for our small group.<br />
<br />
When we took up the exercise, I found I had to explain the YAGNI concept to the instructor/trainer (and rest of the class/team) as he proposed that we try to abstract out these components to allow for compatibility with other features and system elements. What a waste of time! We cannot know what we will need beyond our immediate needs so that kind of abstraction is a pointless exercise that leads to more headaches than you need.<br />
<br />
Eventually, I started to get that the point of writing the test scenarios using BPT was that these components form a basic vocabulary that may then be automated at some point in the future -- yes, BPT integrates with HP's QTP automation tools. Now, I'm all in favour of consistency, clarity, reuse, and automating tests that humans should never have to do more than once (and if there is value in re-running the test), so I struggled to understand why it was never explained to us this way. As long as I kept the DSL/automation model in my head, I understood what we were doing and can see the potential benefits of it.<br />
<br />
I saw many testers struggle with the exercises and models we were presented with. End of day one.<br />
<br />
Day two began with a quick recap and then we were introduced to the bureaucracy that is Waterfall and HPQC. That is, there are review processes and workflows to cover each requirement, BPT and component. <b>Welcome to Wasteland.</b> (I mean the Lean Development concept of "waste" here, although other interpretations of "wasteland" may be just as valid.) Easily a third of the 2-day training was spent on the processes surrounding the management and review of the various HPQC objects. sigh.<br />
<br />
We then moved on to the concept of "parameters" for the components we created yesterday. Okay, I get method parameters when I am scripting with Ruby, so this was no sweat. Given the time I spent parametrising my components compared to everyone else in the class, I think I may be one of the few who really got it.<br />
<br />
I learned a few more things about the main instructor. One was that he didn't know how to identify web page elements using commonly available browser tools. Umm, aren't these tools supposed to interact with web pages? This never came up before? You have never wondered how to find the name/id for an element on a web page? really?<br />
<br />
The other was that he had a really bad sense of humour. He made a reference to a "QA" joke that I shall not repeat here. Needless to say, I found it insulting and offensive; it made my skin crawl and my blood boil. Many unpleasant feelings and ideas arose in me, and it took all my willpower and strength not to react to the blatant stupidity of insulting the profession of the students in your class and the market that the HPQC tool represents.<br />
<br />
The final blow for the day came when we tried to "execute" these BPT test scenarios using the HPQC tool. THEN I discovered that these "parameters" can have 2 kinds of values - fixed/hard-coded and run-time. Anyone who has done intelligent automation knows NOT to hard-code values in their scripts. Data-driven is way better, and as an exploratory tester, I may not know what value I will choose until moments before.<br />
<br />
Here's where the HPQC tool gets stupid...er. Regardless of which parameter type you choose, the values may ONLY be defined BEFORE the test execution! You cannot leave a parameter empty so that when the tester gets to a particular step, they can decide what value to input and feed it back into the tool.<br />
<br />
What does this mean? As a tester, let's say you want to create a new user profile in an app of some kind. The test parameters for things like Name, Email, Address, Country, and so on, must all be defined and set <b><i>before</i></b> the test is executed. You cannot decide <i>while you are actually testing</i>!<br />
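<br />
To show the difference in plain code, here is a generic Ruby sketch (again, this is <i>not</i> HPQC - the field names and prompts are made up). A fixed parameter is locked in before the run; a run-time-resolvable one would let the tester decide at the step itself:<br />
<br />
<pre>
# Values locked in before execution - the only option BPT gave us.
fixed_params = { name: 'Alice', country: 'Canada' }

# Values resolved at the moment the step runs - what I wanted.
runtime_params = {
  name:    -> { print 'Name to try: ';    gets.chomp },
  country: -> { print 'Country to try: '; gets.chomp }
}

def create_profile(params)
  # Resolve any deferred (callable) values at execution time.
  values = params.transform_values { |v| v.respond_to?(:call) ? v.call : v }
  puts "Creating profile with #{values.inspect}"
end

create_profile(fixed_params)    # everything decided up front
create_profile(runtime_params)  # the tester chooses while testing
</pre>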
<br />
Why does this matter to me? It matters because it means that the humans who execute these BPT tests manually have no choice in what values they may input. Testing techniques like Equivalence Classes and Boundary Value Analysis that help guide our choices to pursue interesting paths are completely cut off! It turns out that HPQC treats humans <b><i>worse</i></b> than their automated counterparts. In discussion with an automation "expert" at the end of the class, I learned that you can at least code some variability into the QTP automation scripts. This is not possible with the same test scenarios executed manually by human beings.<br />
<br />
So. Much. Wrong.<br />
<br />
So after my first ever QC training session, here are some of my take-aways:<br />
<br />
<ul><li>It took me 2 days to "script" 6 test scenarios in this tool and they were rotten test cases to begin with! I suspect that outside of the training environment, it will actually take longer to complete since you won't have a "reviewer" sitting next to you waiting for you to finish your piece.</li>
<li>And they weren't even automated! Who knows how long it would take to tweak the "components" to make them work with a particular automation strategy.</li>
<li>HP QC will never be a useful tool for any agile or rapid development efforts</li>
<li>(HP best practice) Put the tool, data and review processes first, before you think. Maybe even instead of thinking at all.</li>
<li>BPT is a DSL framework for test scripting</li>
<li>BPT component parameters cannot be customised <i>during</i> test execution. They may only be set/defined <i>before</i> you start testing. => No thinking allowed while testing.</li>
<li>The instructor didn't appear to be knowledgeable on anything outside of the tool itself. This includes how we might actually want to use the tool. No, no, no. And I quote: "Testers must change how they work to use the tool in the way it was designed."</li>
<li>As long as there are people pushing these kinds of horrible tools that suck the life and intelligence out of people, inject mountains of wasteful activities that provide no value to the customers or end users, and continue to create barriers between testers and their developer counterparts, I will always have job security in helping organisations recover from these cancers.</li>
</ul><br />
At the end of the day, there were several people interested in my idea of randomising tests. The automation expert in the class insisted that it couldn't be done with automation, so I called up my "<a href="http://staqs.com/pubs/Unscripted_Automation_PC2009.pdf">Unscripted Automation</a>" presentation slides that I gave a few years ago. He said that what I proposed was not a "best practice" and that everyone, the whole industry, was using the tools in the way he described.<br />
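<br />
For the curious, the gist of the randomising idea fits in a few lines of Ruby (the fields and equivalence classes below are invented; the important parts are the random choice and the logged seed so any run can be replayed):<br />
<br />
<pre>
# Unscripted/randomised checking, reduced to a sketch.
seed = Random.new_seed
rng  = Random.new(seed)
puts "Run seed: #{seed}  (reuse this seed to replay the exact same run)"

equivalence_classes = {
  name:  ['Bob', 'José', "O'Brien", 'x' * 255, ''],
  age:   [0, 17, 18, 65, 120, -1],
  email: ['a@b.co', 'no-at-sign', '@@']
}

5.times do |i|
  data = equivalence_classes.transform_values { |vals| vals.sample(random: rng) }
  puts "Test #{i + 1}: #{data.inspect}"
  # drive the application with `data` here and check the outcome
end
</pre>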
<br />
My response was to simply say "they are all wrong."Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com26tag:blogger.com,1999:blog-9421494.post-85684363804296652522011-05-14T19:01:00.002-04:002011-05-15T11:05:44.211-04:00Thoughts on the StarEast 2011 conferenceI first attended <a href="http://www.sqe.com/StarEast/">StarEast</a> in 1999. I <i>remember </i>the day-long tutorial I attended (by Rick Craig), and two track sessions - one by Cem Kaner on hiring testers, and one by James Whittaker on "Exploiting a Broken Design Process." I know I attended other sessions but I don't have active memories of them any more. I do remember the <i>experience</i> of attending the conference - one of surprise and excitement. <i>Surprise</i> at seeing so many other people in the testing community with similar questions and problems as myself, and <i>excitement</i> at the speakers with lots of great information and advice to give.<br />
<br />
Fast forward to 2011 - I returned to StarEast, this time <a href="http://www.sqe.com/ConferenceArchive/StarEast2011/SpeakerIndex.html#paulcarvalho">as a speaker</a>. I suppose I didn't need to wait 12 years to return as a speaker. I didn't intentionally ignore the conference. I think I've been busy with other things and it just didn't come up - until last Fall when I received an invitation in my inbox to submit a proposal. I'm really glad I went.<br />
<br />
Some things were familiar - the beautiful hotel, the Florida sunshine, the amazingly fresh orange juice, and the basic conference format. One thing that was different for me this time around was the number of people/speakers I knew who were also speaking at the conference. After having attended and spoken at several other conferences over the years, I guess I have gotten to know many of the popular speakers.<br />
<br />
I was happy to see many more speakers whom I had never heard of before. That tells me that the community is still growing after all this time and that there are still many more people sharing their knowledge to help enlighten future generations of testing leaders. That's awesome!<br />
<br />
<a name='more'></a><br />
I was particularly surprised at the calibre of the Keynote speakers. I was genuinely inspired by every Keynote that I attended (I think I only missed one). (You can find the <a href="http://www.sqe.com/ConferenceArchive/StarEast2011/Schedule.html">2011 program online</a> with details of the speakers and talks but I don't believe they were video recorded.) The Keynote presentations I attended were by: Andy Kaufman, Naomi Karten, and Julie Gardiner. The majority of my #StarEast tweets were from these keynotes. Unfortunately, I missed the Keynote by Gojko Adzic (I heard it was good too), and I'm not including the "lightning" talks here although I did attend those and some were very good.<br />
<br />
I was excited to hear that <a href="http://lisacrispin.com/">Lisa Crispin</a> and <a href="http://janetgregory.ca/">Janet Gregory</a>, authors of <a href="http://www.agiletester.ca/">Agile Testing</a>, were attending the conference and am happy to have had the opportunity to meet them and speak with them! =) Lisa even attended my track session and wrote up a nice summary about it on the <a href="http://blog.softwaretestingclub.com/2011/05/report-from-stareast-2011/">Software Testing Club site</a>.<br />
<br />
I won't really say much about my session here. It went okay I guess. I haven't received the session feedback evaluations yet, but I can say that I was really happy when several people came up to me afterwards in the hallway to thank me for my talk. They said that they really enjoyed it. My favourite comment came from Nawwar, my track chair -- i.e. the person who introduces the speaker during the session. He went from asking me "who are you?" just before the session to "Wow. That was the best session I've seen during this whole conference!" Nice. Thanks. =)<br />
<br />
I had fun giving the talk - "<b><i>Real-time Test Design for Exploratory Testing</i></b>". Test Design is a topic that I am really passionate about and can talk about for hours. Okay, days. ;-) I think the main thing I wasn't terribly crazy about was the format. When I talk about Test Design, it is usually when I am teaching it. So I found it hard to just <i>talk</i> about it and not have an interactive session with the attendees to give them a chance to <i>practice</i> it. Don't get me wrong - I had some interaction in my session, but not in the sit-in-front-of-a-computer-and-try-things-out kind of way. If I give this talk again, I will try to find some way to make it <i>more</i> interactive. (I may need more glow-in-the-dark straws.)<br />
<br />
Back to the conference. I was pleasantly surprised to bump into a former employee - a tester on a team that I managed about a decade ago. Wow! Still in Testing after all this time. AND attending a conference too! Double rainbows! We've hooked another one! ;-)<br />
<br />
I've worked with many testers over the last decade and I hear from so few of them. I am always happy to meet them again and know that they are still doing and learning about Testing. I am happy to know when I help testers get better at what they do - I get ecstatic when I find out that they continue to see Testing as a Profession and participate in conferences and networking events!<br />
<br />
StarEast had a few more surprises for me. At the Vendor Expo I bumped into Rick Craig - the tutorial speaker from when I first attended StarEast in '99. I introduced myself and told him how I remembered him. He was both pleased and suddenly felt older. Ha ha. We had a good chat.<br />
<br />
Continuing my stroll through the Expo, the vendors were vendors. I still think some of the tools are the wrong ones for testers (like, totally and completely without value) but I did see some interesting ones that might have potential in certain circumstances.<br />
<br />
Since I am an independent consultant, most vendors weren't interested in me. I don't represent some big company with deep pockets to shell out on their bloatware documentation tracking systems. Of all the vendors, only one stopped to talk to me and really tried to understand what difficulties I had with their tools. He was fascinated by my knowledge of Agile practices, how testers fit on Scrum teams, and why these tools don't help. He asked me for my card and said that he was interested in contacting me to see if I could help provide some feedback for a future generation of tools that might work better. I gave him my card. I'll be curious to see if he contacts me. =) I love to help - just ask!<br />
<br />
My final pleasant surprise, and ultimately the best reason for attending the StarEast conference, came between sessions and during the meal breaks. I made an effort to sit with people I didn't know and engage them in conversations. Mostly I was curious to know what they did and what brought them to the conference. I was blown away by the passion that many of these attendees have for Testing. Wow. Some described to me the problems they face at work, the inequalities from other development team members (both in status and in salary), and the barriers preventing them from doing more. And yet, here they were at the conference to "learn from the best", to find ways to provide more value, make life better for them, their teams and their organisations.<br />
<br />
They were dripping with hope and enthusiasm and I must say that I was overwhelmed with joy on more than one occasion following a conversation like this. If I had nothing else nice to say about the StarEast conference and received no other benefit from attending, witnessing the passion of the attendees was enough to fill my heart with passion and drive to get out there and do more, teach more!<br />
<br />
I have been exposed to a lot of negativity over the last decade in several of the companies I have worked at. It wears you down after a while. You begin to lose faith in yourself and in the [testing/quality/value] mission. Attending StarEast changed that for me and recharged my spirit in a way that I totally wasn't expecting.<br />
<br />
I was asked if I planned to speak at StarEast again next year. I don't know. I hadn't planned on it. I might not in 2012. We'll see about the following year or maybe I will speak at one of the other SQE conferences - like Better Software, Agile Development Practices or maybe even StarWest (since I haven't been to the Pacific coast yet).<br />
<br />
There is so much to do in this area, in my "local" part of the world. It seems I am gaining recognition everywhere but here and I need to do something about that. StarEast helped recharge me and I hope I enlightened or entertained some attendees of my track session on Test Design and Exploratory Testing. Time will tell if I made a lasting impression on anyone. I hope I did. That wasn't why I went, though, and I got so much more out of it than just the opportunity to share some of what I have learned over the years.<br />
<br />
I'm glad I went.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com2tag:blogger.com,1999:blog-9421494.post-90525803320134283922011-04-16T16:54:00.000-04:002011-04-16T16:54:45.957-04:00Reflection on my Testing workshop at the KWSQA Targeting Quality ConferenceAt this year's <a href="http://www.kwsqa.org/">KWSQA</a> Targeting Quality conference I gave a half-day workshop titled "Exploratory Testing Basics". I originally proposed that title since I thought it followed nicely from the shorter workshop I gave at the QAI TesTrek conference in Toronto last Fall. I thought to myself - I'd like to redo the exercises again, change up a few things and it should be a piece of cake.<br />
<br />
As the Winter months progressed into Spring, I began to worry about my workshop idea more and more. You see, the exercise I gave at the QAI conference, while fun and appropriate, only really covered one aspect of Exploratory Testing - a broader framework. Perhaps that isn't enough? What is enough, then? What makes up the "basics" of ET?<br />
<br />
When I teach ET, it's usually one-on-one and I spend 2-3 days just to cover the basics. It takes me a few more days of pair testing and debriefing/coaching to help the new tester put everything into practice. It really is quite complex, and a lot of the ideas and models may seem abstract until you try them out and adjust with good feedback.<br />
<br />
<a name='more'></a><br />
One of the hardest parts, I feel, is trying to teach certain techniques when the tester doesn't see a need for them. For example, Pairwise Analysis. I was introduced to Pairwise Analysis as "Functional Analysis" about 12 years ago and I got it right away. I had done more complex mathematics in university, so it wasn't the math that was the hitch for me - it was knowing when it might be useful and then applying it.<br />
<br />
The first project I tried it on was when I was asked to perform Installation Testing for a desktop application. If you have ever done this kind of thing, you will know that there are *many* features and variations that all conspire to convince you that it is a daunting task that may consume every waking moment for weeks on end if you ever want to try and cover all of the possible combinations of systems, hardware, software, feature selections, and so on. Enter Functional/Pairwise Analysis. I did the math; came up with a set number of test scenarios; performed them; and reported my findings in record time -- only a few days instead of the customary 2 weeks that it had taken on previous releases.<br />
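<br />
For anyone who wants to see "the math" in code form, here is a minimal greedy all-pairs sketch in Ruby. It is not the tool or the matrix I used back then - the parameters and values below are invented - but it shows how pairwise selection collapses a full combination table:<br />
<br />
<pre>
require 'set'

# Hypothetical install-test parameters; a real matrix would be larger.
params = {
  os:      %w[WinXP Vista Win7],
  browser: %w[IE Firefox Chrome],
  setup:   %w[Typical Custom Minimal]
}

keys = params.keys
first, *rest = params.values
all_rows = first.product(*rest).map { |vals| keys.zip(vals).to_h }

# All the value pairs a row covers, one per pair of parameters.
pairs_in = ->(row) { keys.combination(2).map { |a, b| [a, row[a], b, row[b]] } }

# Every value pair that must appear in at least one test.
needed = Set.new(all_rows.flat_map { |row| pairs_in.call(row) })

# Greedily pick the row covering the most still-needed pairs.
tests = []
until needed.empty?
  best = all_rows.max_by { |row| (needed & pairs_in.call(row)).size }
  tests.push(best)
  needed.subtract(pairs_in.call(best))
end

puts "#{tests.size} pairwise tests instead of #{all_rows.size} combinations"
tests.each { |t| puts t.values.join(' | ') }
</pre>
<br />
Running this prints around ten scenarios instead of all twenty-seven combinations - the same kind of saving that turned my two weeks of installation testing into a few days.<br />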
<br />
That was really cool! I had a new tool in my Tester's Tool belt and I couldn't wait to try it out again.<br />
<br />
Years passed, and after several failed attempts to teach other testers this cool technique, I finally stumbled upon the idea of Just-In-Time teaching. That is, rather than try to teach a tester all the techniques and models that I have learned over two decades and cram them into a few days, I wait until they are presented with a problem and introduce the appropriate technique then.<br />
<br />
There are two important take-aways for me with this JIT approach. First, it is really effective and the tester gets it - great! Second, it may take a <i>long time</i> before a tester is presented with the situations where certain techniques apply - not so great.<br />
<br />
Present day.<br />
<br />
So, what do I include in a 3-hour workshop that I would consider 'good enough' coverage of the basics of ET?<br />
<br />
The answer, of course, is that proper workshop coverage will be the gap in knowledge from where someone <b><i>is</i></b> to where you <b><i>want them to be</i></b>. Unfortunately, the knowledge/experience starting point for each individual attending my session will be vastly different, their needs for this information will be different, and unless they have specific concerns some of the ideas very likely won't stick.<br />
<br />
There are many unknowns in that equation. So how can I plan an outline to cover this unknown gap for unknown purpose(s) in a short amount of time? <i>Stress</i>.<br />
<br />
What did I decide to do? I didn't plan an outline. I took a page out of <a href="http://www.noop.nl/">Jurgen Appelo</a>'s book and I had the attendees self-organise and decide.<br />
<br />
I handed out index cards and asked each person to create a user story card with their goal for the session. We stuck them to a wall with the heading 'Backlog' on it. On another part of the wall I had a Task board with the headings 'To Do', 'In Progress' and 'Done'.<br />
<br />
I asked all the attendees to decide as a group, select the top 4 cards that they wanted to cover this session, and place them in the 'To Do' column. They didn't believe me at first when I asked them to do this and kept asking me to decide. It was really cool to watch the transformation happen and have them take ownership for deciding on the workshop goals.<br />
<br />
I read each card, picked one, moved it into the 'In Progress' column and began.<br />
<br />
So what did I actually cover? The attendees grouped most of the cards together into one big group and the one they picked said that they wanted to "gain an understanding of ET techniques."<br />
<br />
Several years ago, I worked in a Financial Services organisation and was faced with a few audits by Banks where they needed to understand what testing artefacts we produced and why they didn't match their traditional Test documentation expectations.<br />
<br />
Knowing that I couldn't just show them test session notes, test guides and other Exploratory Testing artefacts because they wouldn't understand what they were looking at, I created a presentation that had 4 parts:<br />
<br />
<ol><li>The Challenges in Testing</li>
<li>What is Agile Development?</li>
<li>An Overview of our Systems Testing Approach</li>
<li>Examples of Testing Artefacts we Create on Software Projects</li>
</ol><br />
The auditors wanted to see the last part, but I explained that I needed to cover the first three before they could understand what they saw.<br />
<br />
In the "ET Basics" workshop at the KWSQA conference, I covered the important aspects of the first three sections above (I have expanded upon the original presentation over the years), and supplemented that with some additional exercises to cover a critical aspect of Exploratory Testing - Test Design.<br />
<br />
Did I meet the objectives? I think so. I covered the underlying principles and ideas that form the basis of good testing, "exploratory" or otherwise. The next step would be to look at some specific models and structures and have them <i>perform</i> exploratory testing.<br />
<br />
While performing ET may be an exciting and enlightening activity, it is also a complex and challenging one that requires careful debrief. In other words, we will need more time.<br />
<br />
Will I give this same presentation again? Not sure. I might if that's what the attendees want to see. I'll let them decide. If they want to <i>practice</i> something instead, or focus on <i>managing</i> such a testing activity, then we will do exercises for that instead.<br />
<br />
I had fun in this workshop and learned some great things from an exercise I tried for the first time. I also thought the training video I showed was an appropriate fit for both a Friday afternoon and the topic covered. ;-) I can't wait to build upon what I've learned from this workshop experience and offer more in the future!<br />
<br />
Cheers! Paul.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com0tag:blogger.com,1999:blog-9421494.post-13756548100132474882011-03-23T23:58:00.000-04:002011-03-23T23:58:42.350-04:00Radiating Testing Information - Part 1This topic is one that I have been asked about many times over the years and I am long overdue for a detailed discussion of it. Back in 2006 I presented an <i>Experience Report </i>at the STiFS workshop in New York titled <i>"Low-Tech Testing Dashboard Revisited."</i> The content of that presentation will be in Part 2. To quote "The Do-Re-Mi Song" from the movie <i>The Sound of Music</i>, "Let's start at the very beginning, a very good place to start."<br />
<br />
I attended the StarEast conference in 1999 and there was a talk by James Bach titled "<b>A Low Tech Testing Dashboard</b>." This presentation clicked with me as I was managing several test teams at the time and it addressed a problem that I felt was important. I have used this communication tool many times ever since. If you are <i>not</i> familiar with it, I suggest you read through the <a href="http://www.satisfice.com/presentations/dashboard.pdf">PDF slides on the Satisfice web site</a> before you continue. Go ahead. I'll wait.<br />
<br />
In this review I will cover some of the <i>who, what, where, when, how</i> and <i>why</i> of the Low Tech Testing Dashboard (LTTD) through examples from past projects I have worked on. I expect your context is different, so my hope is that these examples may help you think about how you might apply this communication tool on your project.<br />
<br />
<a name='more'></a><br />
<b>Who are you? Whom is this for?</b><br />
<br />
I have used this tool as a single tester on a development team, and as a team lead/manager managing multiple concurrent teams and projects - using one dashboard for each project.<br />
<br />
The first time I used this dashboard, I was one of two testers working on a web application. We were both doing unscripted, risk-based Exploratory Testing and I needed a way to understand high-level testing progress since we had no other testing documents to use for reference. That is, it's hard to understand what a tester means when they say they are "done" testing a feature when you have no way of knowing whether that means the same thing to both of you. Adopting the terminology and scales outlined in the LTTD presentation helped put us on the same page in describing progress.<br />
<br />
The <i><b>primary audience</b> </i>for this information was our test team. We stood around the board daily and used it to identify priorities and plans for the day. The <b> <i>secondary audience</i></b> was the development and project team. Our team was located next to the kitchen so the Testing Dashboard was in a highly-visible location. As soon as we identified our first "BLOCKED" item on the board, the dashboard became a tool for the development team to identify the immediate priorities to unblock our testing. We explained what 'Blocked' meant the first time it happened and then Dev automatically resolved these issues every time after that without additional prompting. Cool!<br />
<br />
I have used this board in Waterfall-type projects with a lot of scripted tests and documentation, and on Agile projects too. It is helpful for daily Scrum (stand-up) meetings as it can help you remember what you worked on, what you plan to do, and it identifies any blocking issues or risks that we need help with.<br />
<br />
<b>What is the LTTD?</b><br />
<br />
The name has two parts: (1) Low Tech, and (2) Testing Dashboard. They are both important. One describes <i><b>how</b></i> you communicate, and the other indicates <i><b>what</b></i> it is.<br />
<br />
In Agile parlance, I have heard this dashboard referred to as an "information radiator." From the <a href="http://www.agileadvice.com/archives/2005/05/information_rad.html">Agile Advice blog</a>, "an information radiator is a large display of critical team information that is continuously updated and located in a spot where the team can see it constantly." One may argue that the entire Testing function/role/job purpose is to radiate (good) information. I agree with that perspective but will save that topic for discussion another day. Here I will focus on the outward presentation of particular information that communicates important project status and quality details to the project team.<br />
<br />
The <i>Testing Dashboard</i> communicates different aspects or dimensions of the testing effort in a tabular format for everyone on the project team to see. The <i>Low Tech</i> part reminds you this information can and should be communicated using simple, readily available tools in a highly-visible area - such as a dry-erase board. You <i>could</i> make it high tech but you don't need to. In fact, in my experience, for a single co-located development team (i.e. where the whole team is in the same room), anything more high tech than a dry-erase board is likely a waste of time.<br />
<br />
The Testing Dashboard is novel because it communicates (i) Test Effort, (ii) Test Coverage, (iii) Quality Assessment, and (iv) current risks for each Product Area under test, all in one convenient location. It represents a snapshot of these dimensions for a given moment in time or for a particular build. The use of smiley emoticons for the Quality Assessment is particularly effective. I have seen it happen several times where a developer will draw little devil horns on a red unhappy face. This is good because it shows their interest in interacting with the board and it is an expression of their feelings - in a humorous way. Fun interaction like this is always a good thing in my opinion.<br />
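<br />
If it helps to see the board's structure as data, here is a toy Ruby rendering of those four dimensions (the product areas, scales and values below are invented; the real board belongs on a dry-erase surface, not in a script):<br />
<br />
<pre>
# One row per product area: effort, coverage, quality, plus current risks.
Row = Struct.new(:area, :effort, :coverage, :quality, :comments)

board = [
  Row.new('Login',   'high', '2/3', ':)',      ''),
  Row.new('Reports', 'low',  '1/3', ':|',      'waiting on test data'),
  Row.new('Search',  'none', '0/3', 'BLOCKED', 'bug #123 - environment down')
]

puts format('%-10s %-7s %-9s %-8s %s', 'Area', 'Effort', 'Coverage', 'Quality', 'Comments')
board.each do |r|
  puts format('%-10s %-7s %-9s %-8s %s', r.area, r.effort, r.coverage, r.quality, r.comments)
end
</pre>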
<br />
Prior to using this dashboard, I was used to being asked for Test Coverage estimates (in terms of percentage tests complete and tests remaining) during Project Status Update meetings. This was likely a measure of the number of Test Plans, Test Cases, or Requirements covered using a Traceability Matrix of some kind. Those percentages always left me somewhat uneasy as I knew that precise measurements and estimates of such things don't reflect the reality of the remaining work to do.<br />
<br />
What I like about the LTTD "Test Coverage" scale is that it was like going from Digital to Analog (i.e. a Physics/Electronics analogy). Suddenly I have this dial that I can use to more <i>accurately</i> describe the coverage! This is a good thing. In my opinion, <i>precision</i> in calculating remaining work to do is a waste of time. If humans are involved in the process, especially good testers, then your precise estimates will almost always be wrong. (NB: there is an important difference between <i>accuracy</i> and <i>precision</i>. It's great to have both but if I have to choose I'd rather be more accurate than precise.)<br />
<br />
<b>Where do you put the LTTD?</b><br />
<br />
<div style="text-align: right;"><img alt="a low-tech testing dashboard in the hallway" border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinSJpIBi2SlHT3vSjIrlRU_ky3NurVquGlUjmQiBLfR6tseo2KCALdrlQ3YKiTuOMToc82AE99mEp4gW3iC63fD9yyGpWifHXCsm4ZPUPtHAdO38Fy9zjTMUlKr-jfBaxE_D4M/s320/LTTD_PC_03_location.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" width="320" /></div>As previously mentioned, an information radiator like this works best when it is in a highly-visible area. In one company, I placed the LTTD on a whiteboard next to the kitchen, which also happened to be on the way to the development area, so everyone in the company saw it sooner or later. In another company, we placed it on a rolling dry-erase board and positioned it at the entrance to the development area (see photo at right). Again, pretty hard to miss.<br />
<br />
I've had mixed success with making the Dashboard high tech. At one company, I put a copy of the Testing Dashboard on the wiki. We used Atlassian Confluence at the time and it worked very well. Whenever we updated the dry-erase board during our regular morning stand-up meetings, I would take a few minutes to update the wiki version too. I started receiving comments from team leads in other departments thanking me for the information because they could now get testing updates without having to come up to our floor and see the board directly.<br />
<br />
The nice thing about the wiki version was that we could create links to our test strategy pages, bug reports, and other helpful online information. It really does make a handy interactive <i>Testing cover page</i> or executive summary for each project!<br />
<br />
This is a good time to mention another tester I know who has implemented a high tech version of the dashboard. Marlena Compton wrote about her experiences with the LTTD and posted it on her blog - see <a href="http://marlenacompton.com/?p=733">background</a> and <a href="http://marlenacompton.com/?p=1894">CAST 2010 presentation</a>. I like what she's done with it.<br />
<br />
Another time at a different company, I tried putting the Testing Dashboard on a wiki to help facilitate the communication for a test team that was distributed geographically. It didn't work very well. Okay, it didn't work at all. Unfortunately, the test team didn't use the wiki for testing purposes. I believe <b><i>communication</i></b> in general was a problem on this particular project, so the effort was probably doomed from the start. I haven't ruled out a wiki-based Testing Dashboard as a helpful tool for distributed teams, so I plan to try again when the opportunity presents itself.<br />
<br />
<b>When do you update the dashboard?</b><br />
<br />
I create a fresh Testing Dashboard as soon as a project is kicked off. We add testing areas and features to the board as we hear about them and it is a good way to keep track of what is coming and when.<br />
<br />
The original presentation suggests updates 2-5 times per week depending on your needs. That sounds about right. One time we were so involved in our testing that we only updated it once a week. This lasted for several weeks. We noticed that the dashboard was failing as a useful, timely communication tool so we returned to more frequent updates. I recommend nothing less than 2 updates per week.<br />
<br />
If you are making noticeable testing progress then I would expect to update the board daily or maybe every other day. I find that it is a good place for the test team to gather around each morning and discuss what everyone is working on for the day. We can update it based on what we learned from the previous day, and it helps us to stay on top of the current priorities and risks as they come up.<br />
<br />
If you are doing a dedicated Exploratory Testing (ET) effort (i.e. as opposed to ET accidentally happening between <i>scripted</i> test cases), then a tool like this may be useful in helping to communicate testing progress. It's possible that you may already have other tools set up for tracking and communicating progress of your scripted or automated tests. I find that this dashboard communicates more than just progress through a count of tests, so I would still recommend it as a way to supplement your regular testing updates.<br />
<br />
<b>How do you build a Testing Dashboard?</b><br />
<br />
Remember that the key here is to build something simple and easy to maintain that gets the message across clearly. As in Real Estate, the 3 most important things are: location, location, location. Once you have that, then what? Well, it's really up to you, the tools you have available, and your imagination.<br />
<br />
In the beginning, I found that drawing straight lines on white/dry-erase boards was a challenge. In the absence of a metre stick, I would grab whatever was handy and worked - e.g. the metal edge strip off a cubicle wall or even a pizza box. I found that the width of a whiteboard eraser is just right for horizontal line spacing. Then it's just a matter of filling in the table using the appropriate marker colours.<br />
<br />
One annoyance was that every time we updated the board we would always have to re-draw the lines. By the end of the project the dashboard would look pretty ugly. The answer came at another company when I learnt that a developer had brought in painter's tape to make their agile board.<br />
<br />
<img alt="thin green painter's tape" border="0" height="155" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzIiuCyS8c2uF2HntgKgX7IJf5H8FiziP7ue_ogaqpgbhdhVaa2VubutyU7cCUdsQ4Wdfx2YWvpTd5E3KFCfXe5KJBIZTapq25iVCFolcRHZgY2TXjOfEsaxPNlhv0NBPYxW_u/s200/painters_mate_green_6mm.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;" width="200" /><br />
In particular, he used <a href="http://paintersmategreen.com/">Painter's Mate Green</a>, 1/4 in x 60 yds or 6 mm x 55 m. It looks like this photo at right: <br />
<br />
The 6 mm width is just about the same width as a board marker and the best part is that you can erase over it and the lines are always there! The green is also more visible than standard yellow masking tape on a whiteboard. I have seen blue painter's tape too, but I could never find one thin enough to use. Check your local hardware store to see what you have available.<br />
<br />
So the tools I used to make the last few Testing Dashboards are: (1) 6 mm painter's tape, (2) a post-it note (for spacing), and (3) colour markers.<br />
<br />
I start by marking the width of the whiteboard eraser on a post-it note, and then use the post-it note as a guide for making the lines across the board (see photos below). Follow the general table outline as shown in the presentation slides and you will get the general idea.<br />
<img alt="creating a dashboard" border="0" height="150" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPjGKQ4BnAdfG0rjPg1mJAzkAyFM3blch80avNA2x2_HI2bH0FyKwdA-AdBStdvLU_r5SU-y3MTYWLA9VgwtegkISaiRp5N_4gi0H1FTkjQIOkhr-LvoUbAVG6V1QJvzNbjyh_/s200/LTTD_PC_01.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;" width="200" /><br />
<img alt="close-up of how I space the lines without a ruler" border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKbFYghuNh3Xn_QdbihDAFvTNsudaCU-ocTHGSwnG3dcaQwN1l_Kxqd__XtXiAqmiQf-cFYYBYOk0TQxgQhiz31aKE0HriF6Qe-wlGYC7APGlgYiWRyK6l_V-2BSxj_Yr8lNe9/s320/LTTD_PC_02_close-up.png" style="margin-left: 1em; margin-right: 1em;" width="320" /><br />
<br />
A finished board looked like this a few weeks later:<br />
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6UIllGQcVmcA64AyWH0RK2VydR86_V94jFdc6R0mLR4j4niDBcpC6ojTvZb3Do81y0Dwn4-K5H0DPzspA-Lsb7jKNkKqy8TaFViHjJQ71ZFkKOEFrH9Ni_Z0_zenBc1L6HyEl/s1600/LTTD_PC_04_colours.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="a testing dashboard in use" border="0" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6UIllGQcVmcA64AyWH0RK2VydR86_V94jFdc6R0mLR4j4niDBcpC6ojTvZb3Do81y0Dwn4-K5H0DPzspA-Lsb7jKNkKqy8TaFViHjJQ71ZFkKOEFrH9Ni_Z0_zenBc1L6HyEl/s320/LTTD_PC_04_colours.png" width="320" /></a></div>I highly recommend using different-coloured markers when you create and update the dashboard. It makes the different meanings really stand out - i.e. it enhances <i> <b>how</b></i> you communicate the message. For example, when you write "<span style="color: red;">BLOCKED</span>" in big red letters and have a red unhappy smiley face next to some show-stopper bug report number, people notice.<br />
<br />
<b>Why use the LTTD? Why not?</b><br />
<br />
A good reason to use a tool like this is that it helps you and your test team to communicate important information to the project team without them having to ask you every time. In one sense, it does some work for you. For example, project managers can check the dashboard if they want an update and they don't have to interrupt you while you are testing. I don't know many tools that <i>add</i> time to my day. I am happy to say that this is one of them.<br />
<br />
If your role as a tester is to provide valuable information, then <i><b>radiate</b></i> information! Let everyone know how things are going - on the things that are important to them - before they even ask! Is there other information that people care about? Add it to the board!<br />
<br />
The dashboard displays important info at a high level - a level that everyone on the project should understand. I have seen it happen many times when someone asks a tester how things are going and the tester gets lost in the details of the latest problem or issue that he or she encountered. The Testing Dashboard helps pull the tester up out of the details to put such events in a different perspective. Is that latest issue a blocking issue, or just another bug? Is it a show-stopper, or something that you require additional assistance from someone else on the team to help you investigate? What impact does this issue have on the overall quality of this feature or this release? How far along are we in testing this feature or all the features at this point?<br />
<br />
The answers to these questions are always immediately visible on the dashboard. While we are fascinated by the puzzles and challenges of the testing problems we face every day, it's nice to have a dashboard around to remind us how to communicate at a level that our customers need.<br />
<br />
So why <i>wouldn't</i> you want to use a board like this?<br />
<br />
This question is partly rhetorical. I can think of at least one good reason why you might not want to.<br />
<br />
For example, the security policies of your organisation may prevent you from leaving information like this up in a visible location. While making information <i>visible </i>is kind of the whole point of this dashboard, if your organisation won't allow it then you may wish to investigate other options - maybe even high-tech solutions. You have a team of smart people working with you. Get together and brainstorm a solution that works for you.<br />
<br />
There may be other conditions or times when this may not be the right tool for you. "Because it hasn't been done before" is <i><b>not</b></i> a good reason to avoid it. If you are looking to grow, if you are looking for new ways to provide more value in innovative, simple and effective ways, you should give this a try. What will <i>your</i> Testing Dashboard look like? What will it say?<br />
<br />
Radiate!<br />
<br />
<hr /><br />
Here ends Part 1, an introduction to the Low-Tech Testing Dashboard. In Part 2, I will delve deeper into some of the aspects of the dashboard - how we modified it in different projects, and how we integrated it with Session-Based Test Management.<br />
<br />
Before I post Part 2, I am interested to hear what you think. Have you tried using a Testing Dashboard like this before? Did it work for you? Are you thinking about trying it? Any other concerns that I haven't addressed yet? Please leave a comment and let me know what you think.<br />
<br />
Cheers! Paul.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com5tag:blogger.com,1999:blog-9421494.post-46991428766803541022011-01-29T01:56:00.001-05:002011-01-29T01:57:58.324-05:00Testing & Programming = Oil & WaterI was watching a science program just now and it occurred to me that Testing is very much science. And then I wondered about Programming.<br />
<br />
I started in IT over 22 years ago doing programming. For me, the process of programming broke down into three parts: figuring out the algorithm to solve the problem, implementing/coding the solution, and cleaning up the code (for whatever reason - e.g. maintainability, usability of the UI, etc.). It gets more complicated than that of course, but I think that about sums up the major activities as I saw them. (SIDE NOTE: I didn't write those to mirror TDD's Red-Green-Refactor, but it does align nicely that way.)<br />
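<br />
(For anyone who hasn't seen that alignment in action, here is about the smallest Red-Green-Refactor sketch I can write in Ruby, using minitest; the example problem is invented and trivial on purpose.)<br />
<br />
<pre>
require 'minitest/autorun'

# Red: write the checks first and watch them fail (fizz doesn't exist yet).
describe 'fizz' do
  it('marks multiples of three')     { _(fizz(9)).must_equal 'Fizz' }
  it('passes other numbers through') { _(fizz(7)).must_equal '7' }
end

# Green: the simplest algorithm that makes the checks pass.
def fizz(n)
  (n % 3).zero? ? 'Fizz' : n.to_s
end

# Refactor: with the checks green, clean up names and structure safely.
</pre>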
<br />
When I think back on my experiences in programming, I don't see a lot of overlap with my experiences in Science (~ 8 years studying, researching and doing Physics & Environmental Science + teaching Science on top of that). Science is about answering questions. The Scientific Method provides a framework for asking and answering questions. Programming isn't about that. Building software isn't about that. I'm having difficulty at the moment trying to see how testing and programming go together.<br />
<br />
It occurs to me that schools and universities don't have any courses that teach students how to build software. And yet, schools and universities <i>do</i> provide students with opportunities to learn and develop the skills required to build software well. The schools just don't know they're doing it, and consequently the students don't get to take that opportunity intentionally.<br />
<br />
I'm not talking about learning to program. That's trivial. Building software isn't about programming.<br />
<br />
<a name='more'></a><br />
Building software starts with an idea - an idea that someone will pay money for. School courses to watch for here include - Economics, Entrepreneurship.<br />
<br />
Building software requires people to work together. School courses that may apply - Business, Math/Finances, Psychology.<br />
<br />
Building software requires people to work under constraints. "Project Management" is not really taught in schools, but there are many courses available to the public. I found this really painful to learn by doing. A whole world of insights opened up when I took my first PjM course. Reinventing the wheel really is dumb here. I believe this should be formally taught in schools - in High School actually (the earlier, the better).<br />
<br />
Building software requires people to solve difficult problems creatively.<br />
<br />
This one is interesting. I think there are many opportunities for people to learn this skill in school. I know that we definitely covered this in Science. I also know that Engineering programs teach students how to do this. There are many more faculties and programs that this would apply to and they all have one thing in common - there's some formula or method for solving problems for some purpose.<br />
<br />
The thing is, that <i>purpose</i> is different in each case. See, here's where my mind is doing flip-flops.<br />
<br />
In Science, when we solve a problem, the outcome is usually more information. This information feeds back into the original idea to help us check the validity of our initial premise/hypothesis. Sure, there are moments of free-form exploratory investigation, but I don't believe that happens a lot. An experiment can go down an infinite number of paths if you have no particular question in mind, so unless you are intentionally trying to waste time and money, you will start with some question before you begin your experiment or investigation.<br />
<br />
In Engineering, solving a problem is different in that the outcome is usually something real, something tangible, some application or system that fills a need.<br />
<br />
The purpose of Science is <b><i>not</i></b> to build things but to answer questions. The purpose of Engineering is to build things - and to do so safely, ethically, within desired parameters for intended purpose, and so on.<br />
<br />
So where does that leave us in building software? *Building* software is definitely an Engineering task... and then some. I am temporarily over-simplifying the process of building software to focus only on the requirements gathering, design, coding and deployment phases.<br />
<br />
"Testing" comes from and is an integral part of Science, so how does it fit in with these software engineering/development phases of requirements gathering, design, coding and deployment? Well, it doesn't. It has nothing to do with them. And yet, it has everything to do with them.<br />
<br />
That is, from one perspective, at no time do you ever need to test anything to get through any of those phases.<br />
<br />
That last statement, while true, kind of goes against everything I ever learned in school. Whenever I did math problems, I always checked that I got the right answers against the solutions in the back of the textbooks. Why did I do this? Why did I care?<br />
<br />
I did it so that I could tell myself that *how* I solved the problem was correct - it got me the same answer that the textbook and my teacher cared about. I discovered there were exceptions, of course. That is, there were times when I got the correct answer but my method was wrong in some way. Dumb luck does play a role in life and that was when I first discovered the evil twins named Type I and Type II errors. (Side Note: I wonder if that's where Dr. Seuss got his idea for Thing 1 and Thing 2? Hmm..)<br />
<br />
So, the process of checking answers with the "approved" solutions, and handing in assignments for grading by teachers is a feedback mechanism to tell me that I've learned how to solve certain kinds of problems in ways that provide the desired results. Let's assume for a moment that's a good thing.<br />
<br />
Getting back to building software, you can go through requirements gathering, design, coding and deployment without ever once checking that you are producing the desired solution or results. In the end, this is a monumental waste of time and money, and is completely incongruous with the initial premise that you are building a product/service/solution that someone will pay money for. That's bad economics. That's psychotic.<br />
<br />
So how do we fix this? What's the problem here?<br />
<br />
Well, one problem is that we now have a <i>question</i>, we have <i>doubt</i> at the end of each of these phases. We have a desire to learn if the end result of each stage and for the whole process is meeting [our/someone's] expectations. The answer at the back of the textbook here will be provided by the people who are choosing to pay you and not your competitor for what you produce/release/ship.<br />
<br />
Hey! Wait a minute! Science helps us answer questions! Testing is a small part of that process. The bigger process starts with a <b><i>question</i></b> that stems from some research or exploration of the initial area of interest. That question - the hypothesis - is the part we really care about. The "how do we go about gathering enough information to answer this question" part is something different, and we should get people who know how to do this to either (a) do it for us, or (b) help us do it for ourselves. Then there is the analysis of the data or observations in the context of the initial hypothesis.<br />
<br />
But this is a different layer now! We're adding a layer of "science" on top of "engineering". That's weird. That's like trying to mix oil and water together. That is, they <b><i>don't</i></b> mix. If you shake them up together you only end up with some cloudy mess that eventually will separate out again.<br />
<br />
So what does this mean for building software? We need people who are skilled at engineering solutions, and we need people who are skilled at identifying and answering questions about the solutions being engineered. I believe these are two, very separate skill sets required to be successful.<br />
<br />
However, my experience in the Software/IT industry over the last 20 years has been that only one of those skill sets has really been identified as important or relevant -- that of the programmer or engineer in building the solutions.<br />
<br />
This is a problem. There's a huge knowledge gap here.<br />
<br />
Schools don't teach you how to "test" in the context of software development. Every single Testing "certification" agency I have met to date misses the mark. They don't teach you the correct skills. They "teach" you superficial documentation skills that produce information *<b><i>like</i></b>* the kind of information that is required. That would be like me handing out Plumbing certificates to anyone who successfully completes the Mario & Luigi video games because these characters are plumbers in the games. That's just not right. Likewise, there is no actual "science" performed by teaching people to create scientific-like reports. (Although it happens sometimes that testers learn testing skills accidentally if they pay attention to what they're really doing.)<br />
<br />
So, where are we here? Building software requires layers of intelligent, creative effort and problem-solving abilities. These layers are complementary and require different skill sets. Just like you wouldn't hire an accountant to deploy your systems, I don't think it's wise to hire a programmer to provide valuable testing insights into the development process. It's the wrong skill set.<br />
<br />
Oil and water, or oil and vinegar - the analogy holds for me. Testing is something completely different from Programming and building software. It's a layer on top to help you know that what you are doing is on target for what your paying customers are expecting. Some might call that value.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com1tag:blogger.com,1999:blog-9421494.post-73117786584849940062011-01-10T12:17:00.002-05:002011-03-06T18:09:05.658-05:00Software Testing "Popcorn" buttonI made myself some microwave popcorn for a snack just now. Placed the popcorn bag in the microwave, pressed the 'popcorn' button and then 'start'. Someone next to me said: "There's a popcorn button?" Um, yes, there is. In fact, there has been a 'popcorn' button on every microwave oven I've ever seen.<br />
<br />
I explained to my colleague that the recommended time on the bag (in this case it was 2 min 30 sec) doesn't work on every oven. Different ovens have different power output and so the actual cook time may vary. If I go with the default time, it might burn or be under-done and leave too many unpopped kernels in the bag. You could figure out the correct time in a few ways.<br />
<br />
<a name='more'></a><br />
<u>Method 1: Math</u><br />
Start by taking a look at the power output of the oven. Full-size ovens deliver 1,000 - 1,600 watts of power, and mid-size ovens yield 800 - 1,000 watts. Higher wattage heats food more quickly. If it's not explicitly written on the bag, assume the default popcorn popping time is for a 1,000 watt oven.<br />
<br />
Use a constant-energy relationship along the lines of: ( t_your_micro_oven * P_your_micro_oven ) = ( 150 sec * 1000 W ). Note that time and power are inversely related here: a higher-wattage oven needs proportionally <i>less</i> time.<br />
<br />
I converted the 2:30 recommended time on the bag to 150 seconds for convenience. The power of your microwave oven should be written on the back somewhere, probably close to the plug. This leaves your popping time as the only unknown variable in the equation, so it should be straightforward to solve.<br />
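<br />
For the curious, here is a minimal sketch of that calculation in Python (assuming, as stated above, that the bag's 150-second default is calibrated for a 1,000 watt oven):<br />
<pre>
# Estimate the popping time for your oven, assuming the total energy
# delivered (power x time) is what matters for a good pop.
REFERENCE_TIME_S = 150      # the 2:30 default from the bag
REFERENCE_POWER_W = 1000    # assumed calibration wattage

def popping_time(your_power_w):
    """Return the estimated popping time in seconds."""
    return REFERENCE_TIME_S * REFERENCE_POWER_W / your_power_w

print(popping_time(1200))   # a 1,200 W oven => 125.0 seconds
</pre>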
<br />
That might work.<br />
<br />
<u>Method 2: Brute Force/Iterative</u><br />
Put the bag in the oven and follow the instructions exactly. Make no changes. When the time is complete, take the bag out and put the contents in a bowl.<br />
<br />
If you are happy with the result, congratulations! You are done. If not, we will need to change the cooking time for the next bag.<br />
<br />
On a piece of paper (or in a document on your computer if you prefer), capture observations like: time settings used, quality of the popcorn output, taste, number of unpopped kernels, and so on. Keep the note next to the microwave oven or on the container where you keep more popcorn bags, so these 'test results' are on hand for comparison the next time.<br />
<br />
Basically, you know where I'm going with this. Pop the first bag with the default recommendations and keep varying the subsequent popping times until you are happy with the result. This will require "n" bags to iterate through until you are happy with the end result.<br />
<br />
This should eventually produce high-quality results. It may take you some time and will be costly to do it this way as you may have to go through many bags until you get it just right.<br />
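<br />
If you squint, this method is just a feedback loop. Here's a rough sketch in Python - the ten-second adjustment step and the pop_bag helper are invented for illustration:<br />
<pre>
def pop_bag(seconds):
    # Hypothetical stand-in for actually popping and tasting a bag.
    return input(f"Popped for {seconds}s - good, burnt, or unpopped? ")

time_s = 150  # start with the bag's default of 2:30

for bag in range(1, 11):      # iterate through up to "n" = 10 bags
    result = pop_bag(time_s)
    if result == "good":
        print(f"Done! Use {time_s} seconds from now on.")
        break
    elif result == "burnt":
        time_s -= 10          # back off a little
    else:                     # too many unpopped kernels
        time_s += 10          # cook a little longer
</pre>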
<br />
<u>Method 3: Popcorn button</u><br />
Put the bag in the oven, press the 'popcorn' button, serve and enjoy.<br />
<br />
I'm not certain how this works. I shall call it "magic" for now. I can speculate that perhaps the microwave oven manufacturer has a team of engineers dedicated to "Perfect Popcorn Production" using their equipment, who are responsible for programming the correct power and time settings into their ovens.<br />
<br />
Maybe they use Method 1 above. Maybe they use a combination of methods 1 and 2 above. Maybe it's something else.<br />
<br />
The point here is that someone has already done the thinking for me so that I can focus on the quality of experience of using their oven.<br />
<br />
Wow. I have so many comparisons and analogies back to Software Testing running through my head right now.<br />
<br />
One that jumps to mind is development & testing terminology/jargon.<br />
<br />
<ul><li>When is a test not a test? When it is a check. When it is an inspection. When it is a question. And so on.</li>
<li>Are your [agile] development sprints 2 weeks? No? Why not? That's the default value, so it should work for you, right?</li>
</ul><br />
What if software organisations were responsible for their own 'popcorn' buttons? That is, what if a 'popcorn button' were a way to communicate the methods and models used within an organisation that produce the desired 'quality' result?<br />
<br />
As a consultant, one of the first things I do is watch and listen to the development and project team members. I need to understand their terminology and way of doing things. That helps set a reference frame for me. If I want to make an improvement or change somewhere, I need to have an understanding of how to do that on <i>their</i> terms, not terms according to some industry standard or certification terminology dictionary that may not apply.<br />
<br />
What would be the point? Well, I think it would be handy if we could abstract out some of the desired practices from the terminology and implementation details that are specific and custom to every organisation, team or project.<br />
<br />
Now that I think about it, maybe we as consultants are the "Popcorn Programming Facilitators." Say a company wants to implement an automated regression testing activity. What does that look like? How can that work for us? What kind of output do we get?<br />
<br />
Some days we identify the required popcorn buttons (e.g. Regression Testing, Bug Tracking System, Produce Status/Progress Reports). Some days we help program them.<br />
<br />
What do you think?Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com3tag:blogger.com,1999:blog-9421494.post-22903951552053481952010-11-12T19:34:00.002-05:002010-11-17T23:54:49.429-05:00Fishing for WisdomI just came back from a week at the <a href="http://www.ayeconference.com/">AYE Conference</a>. My head is full of new ideas swimming around and stirring up half-baked old ideas - which is a good thing.<br />
<br />
One of the thoughts causing my head to spin came from a session one evening where we discussed ideas to improve the conference moving forward. Johanna Rothman led the session and at one point she mentioned that the AYE workshop sessions included <i>pure</i> [Virginia] Satir [ideas, models, etc.] and <i>applied</i> Satir. This got me thinking about some of the subtle differences I had noticed about the sessions and what they meant to me.<br />
<br />
In particular, some of the ideas and models I learned from the AYE sessions appear to dwell longer in my mind and apply to a broader spectrum of situations while others seem to be more specific - i.e. an application of a model in a particular context. Don't get me wrong, whether you choose to attend a pure Satir or applied Satir workshop at AYE (and the sessions aren't labelled as such because it doesn't really matter), it's a win-win scenario. :) Sure, different hosts have different styles, but each session is different every time so you sometimes see people attend the same session again to see what new insights they get.<br />
<br />
So, what's the big deal here? Why did I get stuck on a small point like this? Well, it reminded me of the time when I was in Teacher's College in the mid-90's, preparing to become a High School Science teacher.<br />
<a name='more'></a><br />
I had 2 main professors in Teacher's College - one for each of my 'teachable' subjects of Physics and Chemistry. Their styles were very different. One professor seemed to keep me busy while the other made me think a lot.<br />
<br />
I'll be honest, coming from a university environment to Teacher's College, I expected to be told what I needed to do to be a teacher. You know - I expected a lecture-style learning environment like most of my previous undergraduate courses. What I got didn't match that expectation so I was a bit frustrated at first until I discovered the secret that no one clearly explains to you.<br />
<br />
What's the secret? Okay, I'll tell you. The secret is that when you go to Teacher's College, *<b>you</b>* are the teacher, not the student. So, by going there acting like a student expecting to be taught, I had the perspective all wrong.<br />
<br />
I don't recall when the paradigm shift happened for me but I'm glad it did. After that point, I didn't consider my professors to be the ones <i>teaching</i> me to be a teacher; I saw them as <i>guides</i> to help me learn the things I needed to be a better teacher.<br />
<br />
The real teacher here is experience. And that you can't get unless you are <i>doing</i> what you want to be doing, not sitting in a classroom somewhere <i>talking about</i> what you want to be doing. So what did I get from the Teacher's College experience? I got access to many different 'teachers' to help me deconstruct my experiences so that I could learn from them.<br />
<br />
I need to pause for a second and think about that last sentence again. That sounds suspiciously a lot like the AYE conference experience to me. Hmm.<br />
<br />
So what was different between my two main professor/guides? I think (now) it was something similar to the difference between the 'pure Satir' and 'applied Satir' AYE sessions. One professor offered tips and ideas that applied in certain situations - the ones we said mattered to us - while the other discussed models and ideas to help us learn from our own experiences in broader situations. Looking back, those things weren't clearly stated in that way at the time. I think I understand a bit more now about how they were trying to help us, in different ways, to become better teachers. Of course, both professors provided us with lots of opportunities to practice demos and teach short topics in an environment where we could safely receive feedback from our peers. (hmm, more AYE conference and PSL familiarity here.)<br />
<br />
I recall that someone once mentioned the old Chinese proverb while we were at Teacher's College: <br />
<blockquote>Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.</blockquote><br />
The quote makes me think of the difference between information and knowledge. I think both professors were trying to teach us to fish (i.e. give us knowledge), just in different ways. They both wanted us to leave the college more confident, with knowledge we could use to help ourselves become better teachers moving forward.<br />
<br />
Back to AYE and present day. Reflecting upon the learnings from this past week, I learned new models and ideas that I can apply in many ways - some apply at work and some apply to life beyond the workplace. I think I left with a few fish and a few new fishing techniques.<br />
<br />
The old proverb bugs me though. I don't think it completely captures the full experience and feeling of what happened. There's something missing, something <i>meta</i>.<br />
<br />
And then it came to me today. It's not the fish. It's not learning to fish either. It's the <i>fishing</i>.<br />
<br />
If you pay attention to what you are doing while you are fishing, I think that leads to something other than information or knowledge; it leads to wisdom.<br />
<br />
Continuing with this proverb as an analogy, if I learn deep-sea fishing while on vacation, I don't think that will help me very much if I decide to go fly-fishing at a local river. There are many different ways to fish - for the different kinds of fish, the environments in which they live, and the purpose of fishing (e.g. food vs sport). If we pay attention to more than just the types of fish, their environments and techniques to catch them, we can learn something more, something bigger. I find it hard to think in broad ideas like that sometimes. I also find those are the most rewarding moments though.<br />
<br />
Attending the AYE conference (and PSL this past Spring) was like that for me. My head doesn't stop thinking about ways to apply the models and ideas we experience at AYE. Meeting wonderful, intelligent, kind practitioners from all over the world helps enrich the shared experiences in ways that bring us closer together. We talk about fish (individual experiences) and techniques to help find solutions to problems we think we see. And then the hosts/speakers go and show us things that help us solve problems we didn't even think about or see!<br />
<br />
For me, attending AYE is an opportunity to meet old friends, learn about myself, learn about how to interact better with others (both at work and in personal life), share experiences and knowledge with colleagues, learn some new problem-solving techniques, make new friends, and pause for a moment to reflect upon where I am in life and where I'd like to be. It's a moment to notice that I'm fishing - I'm learning and growing. And that others are fishing too. And while some are fishing for similar things and others for different things, we all recognise that we are fishing so we have that in common.<br />
<br />
It's going to take me a bit of time to unpack all of the ideas I was exposed to this past week because learning happened on many different levels. I could post some notes, and I plan to, but I don't believe the notes alone can convey the experience of learning that happened there. Much learning happened between conference sessions too. You meet so many people with similar or related interests and different experiences that once you start talking in the hallways, over dinner or lounging about somewhere, you can't help but continue to learn and think about things in new ways.<br />
<br />
If you want, that is. If you're into that sort of thing. If learning, growing, and working better with other human beings isn't your cup of tea, then this conference is definitely *not* for you.<br />
<br />
To everyone else, I highly recommend the experience. This conference, and the PSL workshop, are opportunities that shouldn't be passed up. It might even change the way you think about things. It has for me.<br />
<br />
To the AYE and PSL hosts, Jerry, Don, Esther, Johanna and Steve, to my Teacher's College professors, Peter and Tom, and to my family, friends, and colleagues who have all provided me with helpful feedback and information to help me grow and be a better person, I thank you.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com0tag:blogger.com,1999:blog-9421494.post-89396279111530296732010-09-30T23:56:00.004-04:002010-11-17T23:56:06.199-05:00Using MS Outlook to support SBTMOkay, to recap, Session-Based Test Management (SBTM) is a test framework to help you manage and measure your Exploratory Testing (ET) effort. There are 4 basic elements that make this work: (1) Charter or mission (the <i>purpose</i> that drives the current testing effort), (2) Time-boxed periods (the 'sessions'), (3) Reviewable result, and (4) Debrief. There are many different ways that you might implement or apply these elements in your team or testing projects.<br />
<br />
Let's take a look at tracking the testing effort from strictly a Project Management perspective. Years ago, when I first became a test manager, I was introduced to the idea of the 60% 'productive' work day as a factor to consider when estimating effort applied to project schedules. That is, in a typical 8-hour workday you don't really get 8 complete, full hours of work from someone. I don't believe it's mentally possible to get that. The brain needs a break, as does the body, and there are <b>many</b> natural distractions in the workplace (meetings, email, breaks, support calls, stability of the code or environments, and so on), so the reality is that the number of productive working hours for each employee is actually something less than the total number of hours they're physically present in the workplace.<br />
<br />
That 'productivity' factor changes with each person, their role and responsibilities, the number of projects in the queue, and so on. Applying some statistical averaging to my past experiences, I find that 60% seems about right for a tester dedicated to a single project. I have worked with some teams that have been more productive and some much less.<br />
<br />
So what does this look like? If we consider an 8-hour day, 60% is 4.8 hours. I'm going to toss in an extra 15-minute break or distraction and say that it works out to about 4.5 hours of productive work from a focussed employee in a typical 8-hour day. Again, it depends on the person and the tasks they're performing, so this is just an averaging factor.<br />
<a name='more'></a><br />
<br />
4.5 hours is an interesting number. In the SBTM framework, the "Time box" length for a test session revolves around a normalised value. The original SBTM presentation ("How to Measure Ad Hoc Testing") suggests that 1 "Normal" session = 90 minutes (+/- 15 minutes). Your "normal" time box session may be 90 minutes, or it may be less. Customise it to suit your team's needs and fit.<br />
<br />
I'll use 90 minutes for now because it's the recommended default for a focussed test effort that gets some solid testing done. 90 minutes. That's 1.5 hours. So 3 * Normal = 4.5 hours.<br />
<br />
Therefore, from a project management perspective, 3 Normal sessions is an average tester's target productivity level for completed test sessions in an average 8 hour day.<br />
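<br />
Here's that arithmetic as a quick sketch, using the averaging factors from above:<br />
<pre>
# The numbers behind the "3 Normal sessions a day" rule of thumb.
WORKDAY_HOURS = 8
PRODUCTIVE_FRACTION = 0.60   # the 60% productive-day factor
EXTRA_BREAK_HOURS = 0.25     # the extra 15-minute distraction

productive_hours = WORKDAY_HOURS * PRODUCTIVE_FRACTION - EXTRA_BREAK_HOURS
# 8 * 0.60 - 0.25 = 4.55, i.e. roughly 4.5 hours

NORMAL_SESSION_HOURS = 1.5   # one 90-minute "Normal" session
print(productive_hours // NORMAL_SESSION_HOURS)   # => 3.0 sessions
</pre>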
<br />
Three 90-minute sessions doesn't seem like a lot, but you'd be surprised at the number of distractions throughout the day and the development sprints/cycles that can prevent a tester from completing 3 sessions in a day.<br />
<br />
Recently, I've been working with a tester on an Agile team who has been facing a high distraction level. As a result of all the interruptions for quick checks, reviews, design meetings, and so on, he has been struggling to complete 3 sessions in a day.<br />
<br />
I should mention that I *AM NOT* asking him or anyone else on my team to complete 3 sessions in a day. I am not concerned with the numbers or the math of tracking the sessions completed, especially because the development team environment is very integrated and highly collaborative.<br />
<br />
What I am concerned about is trying to find ways to block off his time so that he can get some solid testing done, uninterrupted. One of the powerful aspects of the time-boxed session is to reduce or minimise the distractions so that your brain can maintain the focus on the testing problems at hand. The more the distractions and interruptions, the more likely that something important or interesting will be missed.<br />
<br />
Enter MS Outlook. When I look at <i>my</i> Outlook calendar it is riddled with meeting requests throughout the week. I have recurring meetings, impromptu meetings, lunch and learns, scheduled debriefs, sprint planning, retrospectives, demos and so on. I'm the Test Manager/Lead, so that's expected. When I look at my tester's calendar, it is fairly blank/open, save for the daily stand-ups and a few other weekly team meetings. It seems counter-intuitive that someone with an open calendar, so much available 'free' time, should be unable to block off time-boxed periods dedicated to testing specific features and charters.<br />
<br />
So I've started to schedule his test sessions in MS Outlook by scheduling Appointments. At the start of the day, we'll sit down and come up with 3 important charters that we'd like to cover in the day. We block off one session/appointment in the morning and two in the afternoon. The "Subject" is a summary of the charter and priority. If the session involves another Dev or team member, they can be included in the appointment, which is now a 'meeting request'.<br />
<br />
Outlook is a convenient tool since it (a) is available on everyone's desktop, (b) has built-in reminders, and (c) allows the tester to *see* all the free time available between scheduled test sessions. It's a way for him to try and regain control of the *uninterrupted* test sessions.<br />
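<br />
If you'd rather script the scheduling than click through the calendar, here is a minimal sketch using Python with the pywin32 library (an assumption on my part - I create ours by hand in Outlook; the charter subject and start time below are placeholders):<br />
<pre>
# Minimal sketch: block off a 90-minute test session in the Outlook
# calendar via COM. Assumes Outlook and pywin32 are installed; the
# subject and start time are placeholders.
import datetime
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application")
appt = outlook.CreateItem(1)   # 1 = olAppointmentItem

appt.Subject = "Test session: Charter 1 (high priority)"
appt.Start = datetime.datetime(2010, 9, 30, 9, 0)
appt.Duration = 90             # minutes - one "Normal" session
appt.BusyStatus = 2            # 2 = olBusy, so the time shows as blocked
appt.ReminderSet = True
appt.ReminderMinutesBeforeStart = 5
appt.Save()                    # lands on the default calendar
</pre>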
<br />
SBTM isn't intended to be a time-tracking tool. It's a productivity-enhancing tool. By keeping focussed on the charter, you give your mind time to think about the important things -- to learn about the system and the software, to apply the appropriate tools and techniques, to make the best observations you can in the time you have available. Performing good Exploratory Testing is a delicate and complex thought process.<br />
<br />
I see my job as doing whatever I can to help provide the best environment possible for good testing to succeed. I never would have thought that Outlook would factor into that. Who knew?Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com1tag:blogger.com,1999:blog-9421494.post-62867546367730940692010-09-16T12:10:00.001-04:002010-11-17T23:56:52.580-05:00Test-Driven Development isn't newI used TDD as an analogy to a tester today to explain how logging bugs in a bug tracking system drives the development. A bug report represents a failing test (once you verify that it's really a bug, that is) according to some stakeholder need/want.<br />
<br />
In Test-Driven Development, the programmer writes/automates the test <i>first</i> that represents the user story that the customer/user wants. The test fails. The programmer then writes enough code required to pass the test and then moves on. (refactoring code along the way, etc..)<br />
<br />
It's much the same with regular system testing (i.e. in the absence of agile/TDD practices) where a tester identifies and logs a bug in the bug tracking system. One difference is that these bug reports/tests aren't always automated. (Okay, I've never seen <b>anyone</b> automate these bug reports/tests before but I like to believe that some companies/dev teams out there actually <i>do</i> do this.) That doesn't change the fact that a bug report is the failing test. Even if it's a manual test, it drives the development change and then the bug report is checked/retested to see that the fix works as expected.<br />
<br />
Bug regression testing, then, is a requirement for good testing and system/software development, not an option.<br />
<br />
So, while the agile practices of TDD and others may seem new, I see this one as a retelling of a common tester-programmer practice. If anything, I see TDD as an opportunity to tighten/shorten/quicken the loop between testing feedback and development. With practice, TDD helps programmers develop the skills and habits they need to create code and systems with confidence -- to know that as the system grows, the specific needs of the customers are being met every step along the way. No one gets left behind.<br />
<br />
How can we, as testers, help? If your programmers don't practice TDD or automate tests, start investigating ways that you can do this. Investigate Open Source scripting languages. Engage your programmers in discussions of testability of the interfaces. There are many articles and presentations on the internet on the topics of test/check automation, frameworks and Domain Specific Languages (DSL).<br />
<br />
Start reading. Participate in discussions (in real life and online). Start developing scripting skills (I recommend Ruby, of course, especially to the tester newbie). If you don't feel confident with your programming skills, help hire someone onto your test team that can help all the testers advance their skills, knowledge, and productivity in that area.<br />
<br />
Be the <i>Quality Advocate</i> by putting your words into practice. You want your programmers to start practicing TDD? Show them how you can do it. You are already doing it - scripting/automating the checks that demonstrate a bug failure is just the next step.<br />
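<br />
To make that concrete, here is a minimal sketch of what automating a single bug failure might look like - in Python here, though Ruby works just as well; the bug number, module and function are all invented for illustration:<br />
<pre>
# One bug report captured as a failing check.
# BUG-1234, myapp.billing and parse_price are invented examples.
import unittest
from myapp.billing import parse_price   # hypothetical module under test

class TestBug1234(unittest.TestCase):
    """BUG-1234: prices with a thousands separator are parsed as zero."""

    def test_comma_separated_price(self):
        # Fails until the bug is fixed; guards against regression after.
        self.assertEqual(parse_price("1,250.00"), 1250.00)

if __name__ == "__main__":
    unittest.main()
</pre>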
<br />
Start by automating a single bug failure. Take it from there.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com0tag:blogger.com,1999:blog-9421494.post-11071874869141049122010-09-03T00:41:00.000-04:002010-09-03T00:41:55.575-04:00Why New Year's Resolutions FailSomeone recently said something to me that made me think. He said that all New Year's resolutions fail because they come at the wrong time.<br />
<br />
You know what I mean by New Year's resolutions, right? It's those promises you make to yourself, and maybe to others, right around the end of December that you will change or improve yourself in some way in the new year.<br />
<br />
The sentiment may not be wrong, but the timing certainly is. The argument made was that January 1st isn't really the start of the new year - September is. You see, here in North America, whether you are in school or not, most businesses revolve around a "school year" structure of September to June, with July and August being the summer holiday months.<br />
<br />
So, if September is the start of the year, we can't make promises to change something in January. That's like starting a 2-week sprint (in Agile Development) and saying halfway through that you are going to have completely new objectives. It doesn't work that way. You already committed to delivering certain goals during the Sprint Planning session at the start.<br />
<br />
What's that? What if you didn't set any goals at the beginning of the Sprint/Year in September? Doesn't matter. The Sprint/year started anyway and you are in the middle of it. There's no way you are easily going to shift your life in a totally new direction halfway through.<br />
<br />
So, the moral of the story is: if you want to make New Year's resolutions, make them in August, not in December. That way you are more likely to follow through with them as the year progresses.<br />
<br />
Hm, interesting.<br />
<br />
Of course, life changing events can happen any time. You don't need to make a resolution of any kind to change yourself and how you get along in the world. You just need to see yourself how you want to be, and live like you've already reached that goal.Paulhttp://www.blogger.com/profile/16826575269870573990noreply@blogger.com0