I was watching a science program just now and it occurred to me that Testing is very much science. And then I wondered about Programming.
I started in IT over 22 years ago doing programming. For me, the process of programming broke down into three parts: figuring out the algorithm to solve the problem, implementing/coding the solution, and cleaning up the code (for whatever reason - e.g. maintainability, usability of UI, etc.). It gets more complicated than that of course, but I think that about sums up the major activities as I saw them. (SIDE NOTE: I didn't write those to mirror TDD's Red-Green-Refactor, but it does align nicely that way.)
When I think back on my experiences in programming, I don't see a lot of overlap with my experiences in Science (~ 8 years studying, researching and doing Physics & Environmental Science + teaching Science on top of that). Science is about answering questions. The Scientific Method provides a framework for asking and answering questions. Programming isn't about that. Building software isn't about that. I'm having difficulty at the moment trying to see how testing and programming go together.
It occurs to me that schools and universities don't have any courses that teach students how to build software. They do, however, provide students with opportunities to learn and develop the skills required to build software well. The schools just don't know they're doing that, so the students never get that opportunity intentionally.
I'm not talking about learning to program. That's trivial. Building software isn't about programming.
Test-Driven Development isn't new
I used TDD as an analogy with a tester today to explain how logging bugs in a bug tracking system drives the development. A bug report represents a failing test (once you verify that it really is a bug, that is) against some stakeholder need/want.
In Test-Driven Development, the programmer first writes/automates a test that represents the user story the customer/user wants. The test fails. The programmer then writes just enough code to pass the test and moves on, refactoring code along the way.
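A minimal Ruby sketch of that Red-Green loop (the user story and `discounted_total` are hypothetical examples, not from any real project):

```ruby
# Hypothetical user story: "Orders over $100 get a 10% discount."
#
# Step 1 (Red): the check below is written *first* and fails until
# discounted_total exists and behaves as the customer wants.
# Step 2 (Green): write just enough code to make the check pass.

def discounted_total(subtotal)
  subtotal > 100 ? subtotal * 0.9 : subtotal
end

# The "tests" - simple assertions that fail loudly if the story isn't met.
raise "orders over $100 should get 10% off" unless
  (discounted_total(200.0) - 180.0).abs < 1e-9
raise "small orders should be unchanged" unless
  discounted_total(50.0) == 50.0

puts "all checks pass"
```

In a real codebase you'd express those assertions with a test framework such as Minitest or RSpec, but the rhythm is the same: failing check first, then code.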
It's much the same with regular system testing (i.e. in the absence of agile/TDD practices), where a tester identifies and logs a bug in the bug tracking system. One difference is that these bug reports/tests aren't always automated. (Okay, I've never seen anyone automate these bug reports/tests before, but I like to believe that some companies/dev teams out there actually do.) That doesn't change the fact that a bug report is the failing test. Even if it's a manual test, it drives the development change, and then the bug report is checked/retested to see that the fix works as expected.
Bug regression testing, then, is a requirement for good testing and system/software development, not an option.
So, while the agile practices of TDD and others may seem new, I see this one as a retelling of a common tester-programmer practice. If anything, I see TDD as an opportunity to tighten/shorten/quicken the loop between testing feedback and development. With practice, TDD helps programmers develop the skills and habits they need to create code and systems with confidence -- to know that as the system grows, the specific needs of the customers are being met every step along the way. No one gets left behind.
How can we, as testers, help? If your programmers don't practice TDD or automate tests, start investigating ways that you can do this. Investigate Open Source scripting languages. Engage your programmers in discussions of testability of the interfaces. There are many articles and presentations on the internet on the topics of test/check automation, frameworks and Domain Specific Languages (DSL).
Start reading. Participate in discussions (in real life and online). Start developing scripting skills (I recommend Ruby, of course, especially to the tester newbie). If you don't feel confident with your programming skills, help hire someone onto your test team that can help all the testers advance their skills, knowledge, and productivity in that area.
Be the Quality Advocate by putting your words into practice. You want your programmers to start practicing TDD? Show them how you can do it. You are already doing it - scripting/automating the checks that demonstrate a bug failure is just the next step.
Start by automating a single bug failure. Take it from there.
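Here's what that first automated bug failure might look like in Ruby. Everything here is a made-up illustration (the bug number, `parse_price`, and its behaviour are all hypothetical):

```ruby
# Hypothetical bug #1234: parse_price("$1,299.00") was reported to
# return 1.0 instead of 1299.0 because the thousands separator broke
# the conversion.
#
# This check is the bug report turned into code: it fails until the
# fix lands, then stays in the suite as a permanent regression test.

def parse_price(text)
  # Fixed behaviour: strip the currency symbol and thousands
  # separators before converting to a float.
  text.delete("$,").to_f
end

expected = 1299.0
actual   = parse_price("$1,299.00")
raise "Bug #1234 has regressed: got #{actual}" unless actual == expected

puts "Bug #1234 check passes"
```

Run it once against the broken build to watch it fail, attach it to the bug report, and rerun it after every fix. That single script is already bug regression testing.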